[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\ndata/\nvar/\nlog/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\n\n\n# PyInstaller\n# 通常如果您使用PyInstaller，以下目录应该被忽略\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n.pytest_cache/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n.idea\n.vscode\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# pytype static type analyzer\n.pytype/\n\n# Cython debug symbols\ncython_debug/\n\n*.log\nexperiment_results/\ncollected_results/\ndemo/comparison/frames/\ndemo/comparison/gifs/\ndemo/comparison/pngs/\ndemo/comparison/htmls/\ndemo/draw/\ndemo/importances/pngs/\n```\n\n**/__pycache__/\n\nrun.sh\nrun1.sh\nfetch_data.py\ntest.sh\nsample_points.py"
  },
  {
    "path": "LICENSE",
    "content": "BSD 3-Clause License\n\nCopyright (c) 2023, peilimao\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n1. Redistributions of source code must retain the above copyright notice, this\n   list of conditions and the following disclaimer.\n\n2. Redistributions in binary form must reproduce the above copyright notice,\n   this list of conditions and the following disclaimer in the documentation\n   and/or other materials provided with the distribution.\n\n3. Neither the name of the copyright holder nor the names of its\n   contributors may be used to endorse or promote products derived from\n   this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "MANIFEST.in",
    "content": ""
  },
  {
    "path": "README.md",
    "content": "<p align=\"center\">\n  <a href=\"https://maopl.github.io/TransOpt-doc/\">\n    <img src=\"./docs/source/_static/figures/transopt_logo.jpg\" alt=\"\" width=\"40%\" align=\"top\">\n  </a>\n</p>\n<p align=\"center\">\n  TransOPT: Transfer Optimization System for Bayesian Optimization Using Transfer Learning<br>\n  <a href=\"https://maopl.github.io/TransOpt-doc/\">Docs</a> |\n  <a href=\"https://maopl.github.io/TransOpt-doc/quickstart.html\">Tutorials</a> |\n  <a href=\"https://maopl.github.io/TransOpt-doc/usage/problems.html\">Examples</a> |\n  <a href=\"\">Paper</a> |\n  <a href=\"https://maopl.github.io/TransOpt-doc\">Citation</a> |\n</p>\n\n<div align=\"center\">\n\n<a href=\"https://github.com/psf/black\"><img alt=\"Code style: black\" src=\"https://img.shields.io/badge/code%20style-black-000000.svg\"></a>\n<a href=\"https://github.com/psf/black\"><img alt=\"Code style: black\" src=\"https://img.shields.io/badge/python_version-3.10-purple\"></a>\n\n</div>\n\n\n# Welcome to TransOPT!\n\n**TransOPT** is an open-source software platform designed to facilitate the **design, benchmarking, and application of transfer learning for Bayesian optimization (TLBO)** algorithms through a modular, data-centric framework.\n\n## Features\n\n- **Contains more than 1000 benchmark problems covers diverse range of domains**.  \n- **Build custom optimization algorithms as easily as stacking building blocks**.  \n- **Leverage historical data to achieve more efficient and informed optimization**.  \n- **Deploy experiments through an intuitive web UI and monitor results in real-time**.\n\nTransOPT empowers researchers and developers to explore innovative optimization solutions effortlessly, bridging the gap between theory and practical application.\n\n# [Installation: how to install TransOPT](https://maopl.github.io/TransOpt-doc/installation.html)\n\nTransOPT is composed of two main components: the backend for data processing and business logic, and the frontend for user interaction. Each can be installed as follows:\n\n### Prerequisites\n\nBefore installing TransOPT, you must have the following installed:\n\n- **Python 3.10+**: Ensure Python is installed.\n- **Node.js 17.9.1+ and npm 8.11.0+**: These are required to install and build the frontend. [Download Node.js](https://nodejs.org/en/download/)\n\nPlease install these prerequisites if they are not already installed on your system.\n\n1. Clone the repository:\n   ```shell\n   $ git clone https://github.com/maopl/TransOpt.git\n   ```\n\n2. Install the required dependencies:\n   ```shell\n   $ cd TransOpt\n   $ python setup.py install\n   ```\n\n3. Install the frontend dependencies:\n   ```shell\n   $ cd webui && npm install\n   ```\n\n### Start the Backend Agent\n\nTo start the backend agent, use the following command:\n\n```bash\n$ python transopt/agent/app.py\n```\n\n### Web User Interface Mode\n\nWhen TransOPT has been started successfully, go to the webui directory and start the web UI on your local machine. 
Enable the user interface mode with the following command:\n```bash\ncd webui && npm start\n```\n\nThis will open the TransOPT interface in your default web browser at `http://localhost:3000`.\n\n\n### Command Line Mode\n\nIn addition to the web UI mode, TransOPT also offers a Command Line (CMD) mode for users who may not have access to a display screen, such as when working on a remote server.\n\nTo run TransOPT in CMD mode, use the following command:\n\n```bash\npython transopt/agent/run_cli.py -n Sphere -v 3 -o 1 -m RF -acf UCB -b 300\n```\n\nThis command sets up a task named Sphere with 3 variables and 1 objective, using a random forest (RF) surrogate model and the upper confidence bound (UCB) acquisition function, with a budget of 300 function evaluations.\n\nFor a complete list of available options and more detailed usage instructions, please refer to the [CLI documentation](https://maopl.github.io/TransOpt-doc/usage/cli.html).\n\n\n# [Documentation: The TransOPT Process](https://maopl.github.io/TransOpt-doc/)\n\nOur docs walk you through using TransOPT, the web UI, and the key API entry points. For an overview of the system and the project-management workflow, see our [documentation](https://maopl.github.io/TransOpt-doc/).\n\n\n<p align=\"center\">\n<img src=\"./docs/source/_static/figures/Transopt_workflow.png\" width=\"95%\">\n</p>\n\n\n# Why use TransOPT?\n\nIn recent years, Bayesian optimization (BO) has been widely used in various fields, such as hyperparameter optimization, molecular design, and synthetic biology. However, conventional BO is inefficient: it conducts every optimization task from scratch, ignoring the experience gained from previous problem-solving practice. To address this challenge, transfer learning (TL) has been introduced to BO, aiming to leverage auxiliary data to improve optimization efficiency and performance. Despite this potential, the adoption of TLBO is still limited by the complexity of advanced TLBO methods. TransOPT is a system that facilitates:\n\n- development of TLBO algorithms;\n- benchmarking the performance of TLBO methods;\n- applications of TLBO for downstream tasks;\n\n<p align=\"center\">\n<img src=\"./docs/source/_static/figures/Results.png\" width=\"95%\">\n</p>\n\n**Upper-left:** illustrates the use of a web UI to construct new optimization algorithms by combining different components. **Upper-right:** highlights the application of an LLM agent to effectively manage optimization tasks. **Middle:** shows various visualization results derived from the optimization processes. **Lower:** presents a performance comparison of different TLBO methods.\n\n\n# Reference & Citation\n\nIf you find our work helpful to your research, please consider citing it:\n\n```bibtex\n@article{TransOPT,\n  title = {{TransOPT}: Transfer Optimization System for Bayesian Optimization Using Transfer Learning},\n  author = {Author Name and Collaborator Name},\n  url = {https://github.com/maopl/TransOPT},\n  year = {2024}\n}\n```\n"
  },
  {
    "path": "demo/analysis.py",
    "content": "import logging\nimport os\nimport argparse\n\nfrom pathlib import Path\nfrom transopt.ResultAnalysis.AnalysisPipeline import analysis_pipeline\n\n\ndef run_analysis(Exper_folder:Path, tasks, methods, seeds, args):\n    logger = logging.getLogger(__name__)\n    analysis_pipeline(Exper_folder, tasks=tasks, methods=methods, seeds=seeds, args=args)\n\n\n\n\nif __name__ == '__main__':\n    tasks = {\n        # 'cp': {'budget': 8, 'time_stamp': 2, 'params': {'input_dim': 2}},\n        'Ackley': {'budget': 11, 'time_stamp': 3, 'params':{'input_dim':1}},\n        # 'MPB': {'budget': 110, 'time_stamp': 3},\n        # 'Griewank': {'budget': 11, 'time_stamp': 3,  'params':{'input_dim':1}},\n        # 'DixonPrice': {'budget': 110, 'time_stamp': 3},\n        # 'Lunar': {'budget': 110, 'time_stamp': 3},\n        # 'XGB': {'budget': 110, 'time_stamp': 3},\n    }\n    Methods_list = {'MTBO', 'BO'}\n    Seeds = [1,2,3,4,5]\n\n    parser = argparse.ArgumentParser(description='Process some integers.')\n    parser.add_argument(\"-in\", \"--init_number\", type=int, default=0)\n    parser.add_argument(\"-p\", \"--exp_path\", type=str, default='../LFL_experiments')\n    parser.add_argument(\"-n\", \"--exp_name\", type=str, default='test')  # 实验名称，保存在experiments中\n    parser.add_argument(\"-c\", \"--comparision\", type=bool, default=True)\n    parser.add_argument(\"-a\", \"--track\", type=bool, default=True)\n    parser.add_argument(\"-r\", \"--report\", type=bool, default=False)\n\n\n    args = parser.parse_args()\n    Exp_name = args.exp_name\n    Exp_folder = args.exp_path\n    Exper_folder = '{}/{}'.format(Exp_folder, Exp_name)\n    Exper_folder = Path(Exper_folder)\n    run_analysis(Exper_folder, tasks=tasks, methods=Methods_list, seeds = Seeds, args=args)\n\n"
  },
  {
    "path": "demo/causal_analysis.py",
    "content": "import logging\nimport os\nimport argparse\n\nfrom pathlib import Path\nfrom transopt.ResultAnalysis.AnalysisPipeline import analysis_pipeline\n\n\ndef run_analysis(Exper_folder:Path, tasks, methods, seeds, args):\n    logger = logging.getLogger(__name__)\n    analysis_pipeline(Exper_folder, tasks=tasks, methods=methods, seeds=seeds, args=args)\n\n\n\nif __name__ == '__main__':\n    tasks = {\n        \"GCC\": {\"budget\": samples_num, \"workloads\": workloads},\n        \"LLVM\": {\"budget\": samples_num, \"workloads\": workloads},\n    }\n\n    available_workloads = CompilerBenchmarkBase.AVAILABLE_WORKLOADS\n    split_workloads = split_into_segments(available_workloads, 10)\n\n    if split_index >= len(split_workloads):\n        raise IndexError(\"split index out of range\")\n\n    workloads = split_workloads[split_index]\n\n    tasks = {\n        \"GCC\": {\"budget\": samples_num, \"workloads\": workloads},\n        \"LLVM\": {\"budget\": samples_num, \"workloads\": workloads},\n    }\n    Methods_list = {'MTBO', 'BO'}\n    Seeds = [1,2,3,4,5]\n\n    parser = argparse.ArgumentParser(description='Process some integers.')\n    parser.add_argument(\"-in\", \"--init_number\", type=int, default=0)\n    parser.add_argument(\"-p\", \"--exp_path\", type=str, default='../LFL_experiments')\n    parser.add_argument(\"-n\", \"--exp_name\", type=str, default='test')  # 实验名称，保存在experiments中\n    parser.add_argument(\"-c\", \"--comparision\", type=bool, default=True)\n    parser.add_argument(\"-a\", \"--track\", type=bool, default=True)\n    parser.add_argument(\"-r\", \"--report\", type=bool, default=False)\n\n\n    args = parser.parse_args()\n    Exp_name = args.exp_name\n    Exp_folder = args.exp_path\n    Exper_folder = '{}/{}'.format(Exp_folder, Exp_name)\n    Exper_folder = Path(Exper_folder)\n    run_analysis(Exper_folder, tasks=tasks, methods=Methods_list, seeds = Seeds, args=args)\n\n"
  },
  {
    "path": "demo/comparison/analysis_hypervolume.py",
    "content": "import sys\nfrom pathlib import Path\n\ncurrent_path = Path(__file__).resolve().parent\npackage_path = current_path.parent.parent\nsys.path.insert(0, str(package_path))\n\nimport json\n\nimport numpy as np\nimport pandas as pd\nimport scipy.stats\n\nfrom transopt.utils.pareto import calc_hypervolume, find_pareto_front\nfrom transopt.utils.plot import plot3D\n\ntarget = \"gcc\"\n\nresults_path = package_path / \"experiment_results\"\ngcc_results = results_path / \"gcc_archive_new\"\nllvm_results = results_path / \"llvm_archive\"\n\nalgorithm_list = [\"ParEGO\", \"SMSEGO\", \"MoeadEGO\", \"CauMO\"]\nobjectives = [\"execution_time\", \"file_size\", \"compilation_time\"]\nseed_list = [65535, 65536, 65537, 65538, 65539]\n\n\ndef load_and_prepare_data(file_path, objectives):\n    \"\"\"\n    Loads JSON data and prepares a DataFrame.\n    \"\"\"\n    # print(f\"Loading data from {file_path}\")\n    with open(file_path, \"r\") as f:\n        data = json.load(f)\n        data = data.get(\"1\", {})\n\n    input_vectors = data[\"input_vector\"]\n    output_vectors = data[\"output_value\"]\n\n    df_input = pd.DataFrame(input_vectors)\n\n    df_output = pd.DataFrame(output_vectors)[objectives]\n    df_combined = pd.concat([df_input, df_output], axis=1)\n    # print(f\"Loaded {len(df_combined)} data points\")\n\n    df_combined = df_combined.drop_duplicates(subset=df_input.columns.tolist())\n\n    for obj in objectives:\n        df_combined = df_combined[df_combined[obj] != 1e10]\n\n    # print(f\"Loaded {len(df_combined)} data points, removed {len(df_input) - len(df_combined)} duplicates\")\n    # print()\n    return df_combined\n\ndef load_data(workload, algorithm, seed):\n    if target == \"llvm\":\n        result_file = llvm_results / f\"llvm_{workload}\" / algorithm / f\"{seed}_KB.json\"\n    else:\n        result_file = gcc_results / f\"gcc_{workload}\" / algorithm / f\"{seed}_KB.json\"\n    df = load_and_prepare_data(result_file, objectives)\n    return df\n\ndef collect_all_data(workload):\n    all_data = []\n    for algorithm in algorithm_list:\n        for seed in seed_list:\n            df = load_data(workload, algorithm, seed)\n            all_data.append(df[objectives].values)\n    all_data = np.vstack(all_data)\n    global_mean = all_data.mean(axis=0)\n    global_std = all_data.std(axis=0)\n    return all_data, global_mean, global_std\n\n\ndef calculate_mean_hypervolume(\n    algorithm, workload, global_stats1, global_stats2, normalization_type=\"min-max\"\n):\n    \"\"\"\n    Calculate mean hypervolume for a given algorithm across all seeds.\n\n    Parameters:\n    global_stats1: Global mean or min of all objectives (depending on normalization_type)\n    global_stats2: Global std or max of all objectives (depending on normalization_type)\n    normalization_type: 'min-max' or 'mean' for different types of normalization\n    \"\"\"\n    hypervolume_list = []\n    for seed in seed_list:\n        df = load_data(workload, algorithm, seed)\n\n        if normalization_type == \"mean\":\n            # Apply mean normalization\n            normalized_df = (df[objectives] - global_stats1) / global_stats2\n        elif normalization_type == \"min-max\":\n            # Apply min-max normalization\n            normalized_df = (df[objectives] - global_stats1) / (\n                global_stats2 - global_stats1\n            )\n        else:\n            raise ValueError(\n                \"Unsupported normalization type. 
Choose 'mean' or 'min-max'.\"\n            )\n\n        pareto_front = find_pareto_front(normalized_df.values)\n        hypervolume = calc_hypervolume(pareto_front, np.ones(len(objectives)))\n        # print(f\"{algorithm} {seed} {hypervolume}\")\n        hypervolume_list.append(hypervolume)\n\n    return np.mean(hypervolume_list)\n\n\ndef calculate_hypervolumes(\n    algorithm, workload, global_stats1, global_stats2, normalization_type=\"min-max\"\n):\n    \"\"\"\n    Calculate hypervolumes for a given algorithm across all seeds.\n\n    Parameters:\n    global_stats1: Global mean or min of all objectives (depending on normalization_type)\n    global_stats2: Global std or max of all objectives (depending on normalization_type)\n    normalization_type: 'min-max' or 'mean' for different types of normalization\n    \"\"\"\n    hypervolume_list = []\n    for seed in seed_list:\n        df = load_data(workload, algorithm, seed)\n\n        if normalization_type == \"mean\":\n            normalized_df = (df[objectives] - global_stats1) / global_stats2\n        elif normalization_type == \"min-max\":\n            normalized_df = (df[objectives] - global_stats1) / (global_stats2 - global_stats1)\n        else:\n            raise ValueError(\"Unsupported normalization type. Choose 'mean' or 'min-max'.\")\n\n        pareto_front = find_pareto_front(normalized_df.values)\n        hypervolume = calc_hypervolume(pareto_front, np.ones(len(objectives)))\n        hypervolume_list.append(hypervolume)\n\n    return hypervolume_list\n\ndef analyze_and_compare_algorithms(workload_results):\n    analysis_results = {}\n\n    for workload, algorithms in workload_results.items():\n        workload_analysis = {\n            'means': {},\n            'std_devs': {},\n            'significance': {}\n        }\n\n        # Compute each algorithm's mean hypervolume and standard deviation, and find the best algorithm\n        best_algorithm = None\n        best_mean_hv = -float('inf')\n        for algorithm, hypervolumes in algorithms.items():\n            mean_hv = np.mean(hypervolumes)\n            workload_analysis['means'][algorithm] = mean_hv\n            workload_analysis['std_devs'][algorithm] = np.std(hypervolumes)\n\n            if mean_hv > best_mean_hv:\n                best_mean_hv = mean_hv\n                best_algorithm = algorithm\n\n        # Run a significance test for each algorithm, comparing only against the best algorithm\n        for algorithm, hypervolumes in algorithms.items():\n            if algorithm != best_algorithm:\n                stat, p_value = scipy.stats.mannwhitneyu(algorithms[best_algorithm], hypervolumes)\n                comparison_key = f\"{algorithm} vs {best_algorithm}\"\n                workload_analysis['significance'][comparison_key] = ('+' if p_value < 0.05 else '-')\n\n        # # Pairwise significance tests between all algorithms\n        # algorithm_names = list(algorithms.keys())\n        # for i in range(len(algorithm_names)):\n        #     for j in range(i+1, len(algorithm_names)):\n        #         hypervolumes1 = algorithms[algorithm_names[i]]\n        #         hypervolumes2 = algorithms[algorithm_names[j]]\n        #         stat, p_value = scipy.stats.mannwhitneyu(hypervolumes1, hypervolumes2)\n        #         comparison_key = f\"{algorithm_names[i]} vs {algorithm_names[j]}\"\n        #         workload_analysis['significance'][comparison_key] = ('+' if p_value < 0.05 else '-')\n\n        analysis_results[workload] = workload_analysis\n\n    return analysis_results\n\ndef matrix_to_latex(analysis_results, caption):\n    latex_code = []\n\n    # Add the document class and packages\n    latex_code.extend([\n        
\"\\\\documentclass{article}\",\n        \"\\\\usepackage{geometry}\",\n        \"\\\\geometry{a4paper, margin=1in}\",\n        \"\\\\usepackage{graphicx}\",\n        \"\\\\usepackage{colortbl}\",\n        \"\\\\usepackage{booktabs}\",\n        \"\\\\usepackage{threeparttable}\",\n        \"\\\\usepackage{caption}\",\n        \"\\\\usepackage{xcolor}\",\n        \"\\\\pagestyle{empty}\",\n        \"\\\\begin{document}\",\n        \"\\\\begin{table*}[t!]\",\n        \"    \\\\scriptsize\",\n        \"    \\\\centering\",\n        f\"    \\\\caption{{{caption}}}\",\n        \"    \\\\resizebox{1.0\\\\textwidth}{!}{\",\n        \"    \\\\begin{tabular}{c|\" + \"\".join([\"c\"] * len(analysis_results)) + \"}\",\n        \"        \\\\hline\"\n    ])\n\n    # 确定算法列表\n    algorithms = list(analysis_results[next(iter(analysis_results))]['means'].keys())\n\n    # 添加列名（每个算法一个列）\n    col_header = \" & \".join([\"\"] + [f\"\\\\texttt{{{algorithm}}}\" for algorithm in algorithms]) + \" \\\\\\\\\"\n    latex_code.append(\"        \" + col_header)\n    latex_code.append(\"        \\\\hline\")\n\n    # 添加行\n    for workload in analysis_results.keys():\n        row_data = [workload]\n        best_algorithm = max(analysis_results[workload]['means'], key=analysis_results[workload]['means'].get)\n        for algorithm in analysis_results[workload]['means'].keys():\n            mean = analysis_results[workload]['means'][algorithm]\n            std_dev = analysis_results[workload]['std_devs'][algorithm]\n            significance_mark = \"\"\n\n            if algorithm != best_algorithm:\n                for other_algorithm, sig_value in analysis_results[workload]['significance'].items():\n                    if algorithm in other_algorithm and sig_value == '+':\n                        significance_mark = \"$^\\\\dagger$\"\n                        break\n\n            if algorithm == best_algorithm:\n                row_data.append(f\"\\\\cellcolor[rgb]{{.682, .667, .667}}\\\\textbf{{{mean:.3f} (±{std_dev:.3f})}}{significance_mark}\")\n            else:\n                row_data.append(f\"{mean:.3f} (±{std_dev:.3f}){significance_mark}\")\n\n        latex_code.append(\"        \" + \" & \".join(row_data) + \" \\\\\\\\\")\n\n    # 添加表注\n    latex_code.extend([\n        \"        \\\\hline\",\n        \"    \\\\end{tabular}\",\n        \"    }\",\n        \"    \\\\begin{tablenotes}\",\n        \"        \\\\tiny\",\n        \"        \\\\item $^\\\\dagger$ indicates that the best algorithm is significantly better than the other one according to the Wilcoxon signed-rank test at a 5\\\\% significance level.\"\n        \"    \\\\end{tablenotes}\",\n        \"\\\\end{table*}%\",\n        \"\\\\end{document}\"\n    ])\n    \n        # latex_code.append(\"        \" + \" & \".join(row_data)\n                          \n    # # 添加列名\n    # col_header = \" & \".join([\"\"] + list(analysis_results.keys())) + \" \\\\\\\\\"\n    # latex_code.append(\"        \" + col_header)\n    # latex_code.append(\"        \\\\hline\")\n\n    # # 添加行\n    # for algorithm in analysis_results[next(iter(analysis_results))]['means'].keys():\n    #     row_data = [f\"\\\\texttt{{{algorithm}}}\"]\n    #     for workload, results in analysis_results.items():\n    #         mean = results['means'][algorithm]\n    #         std_dev = results['std_devs'][algorithm]\n    #         significance_mark = \"\"\n\n    #         for other_algorithm, sig_value in results['significance'].items():\n    #             if algorithm in other_algorithm and 
sig_value == '+':\n    #                 significance_mark = \"$^\\\\dagger$\"\n    #                 break\n\n    #         row_data.append(f\"{mean:.3f} (±{std_dev:.3f}){significance_mark}\")\n    #     latex_code.append(\"        \" + \" & \".join(row_data) + \" \\\\\\\\\")\n\n    return \"\\n\".join(latex_code)\n\n\ndef load_workloads():\n    file_path = package_path / \"demo\" / \"comparison\" / f\"features_by_workload_{target}.json\"\n    with open(file_path, \"r\") as f:\n        return json.load(f).keys()\n\n\nif __name__ == \"__main__\":\n    workloads = load_workloads()\n\n    workloads = list(workloads)\n    workloads.sort()\n    workloads = workloads[:14]\n\n    # workloads = [\n    #     \"cbench-automotive-qsort1\",\n    #     \"cbench-automotive-susan-e\",\n    #     \"cbench-network-patricia\",\n    #     \"cbench-automotive-bitcount\",\n    #     \"cbench-bzip2\",\n    #     \"cbench-telecom-adpcm-d\",\n    #     \"cbench-office-stringsearch2\",\n    #     \"cbench-security-rijndael\",\n    #     \"cbench-security-sha\",\n    # ]\n\n    workload_results = {}\n    for workload in workloads:\n        print(f\"Processing workload: {workload}\")\n        all_data, global_mean, global_std = collect_all_data(workload)\n        global_max = all_data.max(axis=0)\n        global_min = all_data.min(axis=0)\n\n        algorithm_results = {}\n        for algorithm in algorithm_list:\n            hypervolumes = calculate_hypervolumes(\n                algorithm,\n                workload,\n                global_min,\n                global_max,\n                normalization_type=\"min-max\",\n            )\n            algorithm_results[algorithm] = hypervolumes\n\n        # Remove the 7-character \"cbench-\" prefix from the workload name\n        workload_short_name = workload[7:]\n        workload_results[workload_short_name] = algorithm_results\n\n    final_results = analyze_and_compare_algorithms(workload_results)\n    print(final_results)\n\n    caption = \"Performance Comparison of Algorithms\"\n    latex_table = matrix_to_latex(final_results, caption)\n\n    latex_table_path = \"latex_table.tex\"\n    with open(latex_table_path, 'w') as file:\n        file.write(latex_table)\n\n    # for workload in workloads:\n    #     print(workload)\n    #     all_data, global_mean, global_std = collect_all_data(workload)\n    #     global_max = all_data.max(axis=0)\n    #     global_min = all_data.min(axis=0)\n\n    #     hv_list = []\n    #     for algorithm in algorithm_list:\n    #         mean_hypervolume = calculate_mean_hypervolume(\n    #             algorithm,\n    #             workload,\n    #             global_min,\n    #             global_max,\n    #             normalization_type=\"min-max\",\n    #         )\n    #         hv_list.append((algorithm, mean_hypervolume))\n\n    #     # Sort by hypervolume\n    #     hv_list.sort(key=lambda x: x[1], reverse=True)\n\n    #     print(hv_list)\n    #     print()\n"
  },
  {
    "path": "demo/comparison/analysis_plot.py",
    "content": "import sys\nfrom pathlib import Path\n\ncurrent_path = Path(__file__).resolve().parent\npackage_path = current_path.parent.parent\nsys.path.insert(0, str(package_path))\n\nimport json\nimport os\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n# import plotly.graph_objects as go\nfrom matplotlib.animation import FuncAnimation\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom transopt.utils.pareto import calc_hypervolume, find_pareto_front\nfrom transopt.utils.plot import plot3D\n\ntarget = \"gcc\"\nresults_path = package_path / \"experiment_results\"\ngcc_results_path = results_path / \"gcc_comparsion\"\ngcc_samples_path = results_path / \"gcc_samples\"\nllvm_results = results_path / \"llvm_comparsion\"\nllvm_samples_path = results_path / \"llvm_samples\"\n\ndbms_samples_path = results_path / \"dbms_samples\"\n\nalgorithm_list = [\"ParEGO\", \"SMSEGO\", \"MoeadEGO\", \"CauMO\"]\n# algorithm_list = [\"SMSEGO\"]\n# objectives = [\"execution_time\", \"file_size\", \"compilation_time\"]\nobjectives = [\"latency\", \"throughput\"]\nseed_list = [65535, 65536, 65537, 65538, 65539]\n\n\ndef load_and_prepare_data(file_path):\n    \"\"\"\n    Loads JSON data and prepares a DataFrame.\n    \"\"\"\n    # print(f\"Loading data from {file_path}\")\n    with open(file_path, \"r\") as f:\n        data = json.load(f)\n        if \"1\" in data:\n            data = data[\"1\"]\n\n    input_vectors = data[\"input_vector\"]\n    output_vectors = data[\"output_value\"]\n\n    df_input = pd.DataFrame(input_vectors)\n\n    df_output = pd.DataFrame(output_vectors)[objectives]\n    df_combined = pd.concat([df_input, df_output], axis=1)\n    print(f\"Loaded {len(df_combined)} data points\")\n\n    df_combined = df_combined.drop_duplicates(subset=df_input.columns.tolist())\n\n    for obj in objectives:\n        df_combined = df_combined[df_combined[obj] != 1e10]\n\n    print(f\"Loaded {len(df_combined)} data points, removed {len(df_input) - len(df_combined)} duplicates\")\n    print()\n    return df_combined\n\ndef load_data(workload, algorithm, seed):\n    if target == \"llvm\":\n        result_file = llvm_results / f\"llvm_{workload}\" / algorithm / f\"{seed}_KB.json\"\n    else:\n        result_file = gcc_results_path / f\"gcc_{workload}\" / algorithm / f\"{seed}_KB.json\"\n    df = load_and_prepare_data(result_file)\n    return df\n\ndef collect_all_data(workload):\n    all_data = []\n    for algorithm in algorithm_list:\n        for seed in seed_list:\n            df = load_data(workload, algorithm, seed)\n            all_data.append(df[objectives].values)\n    all_data = np.vstack(all_data)\n    global_mean = all_data.mean(axis=0)\n    global_std = all_data.std(axis=0)\n    return all_data, global_mean, global_std\n\n\ndef dynamic_plot(workload, algorithm, seed):\n    \"\"\"\n    Dynamically plot the three objectives for a given workload and algorithm for a specific seed.\n    \"\"\"\n    # Collect all data to understand the range\n    all_data, global_mean, global_std = collect_all_data(workload)\n    global_min = np.min(all_data, axis=0)\n    global_max = np.max(all_data, axis=0)\n   \n    # Load data for the specific seed\n    df = load_data(workload, algorithm, seed)\n    \n    # Normalize data (Min-Max normalization)\n    df_normalized = (df[objectives] - global_min) / (global_max - global_min)\n     \n    fig = plt.figure()\n    ax = fig.add_subplot(111, projection='3d')\n    ax.set_title(f\"Dynamic Plot for {workload} - {algorithm} - Seed {seed}\")\n    
ax.set_xlabel(objectives[0])\n    ax.set_ylabel(objectives[1])\n    ax.set_zlabel(objectives[2])\n\n    # Initialize two scatter plots: one for all previous points, one for the new point\n    previous_points = ax.scatter([], [], [], c='b', marker='o')  # all previous points in blue\n    current_point = ax.scatter([], [], [], c='r', marker='o')  # current point in red\n\n    def init():\n        previous_points._offsets3d = ([], [], [])\n        current_point._offsets3d = ([], [], [])\n        return previous_points, current_point\n\n    def update(frame):\n        # Add all previous points up to the current frame\n        previous_points._offsets3d = (df_normalized.iloc[:frame][objectives[0]].values,\n                                      df_normalized.iloc[:frame][objectives[1]].values,\n                                      df_normalized.iloc[:frame][objectives[2]].values)\n\n        # Add the current point (latest one in the sequence)\n        current_point._offsets3d = (df_normalized.iloc[frame:frame+1][objectives[0]].values,\n                                    df_normalized.iloc[frame:frame+1][objectives[1]].values,\n                                    df_normalized.iloc[frame:frame+1][objectives[2]].values)\n        return previous_points, current_point\n\n    frames = len(df)\n    ani = FuncAnimation(fig, update, frames=frames, blit=False, repeat=False)\n\n    # Save the plot to a file\n    gif_path = package_path / \"demo\" / \"comparison\" / \"gifs\" / f\"{target}_{algorithm}_{workload}_{seed}.gif\"\n    ani.save(gif_path, writer='imagemagick')\n    plt.close(fig)  # Close the plot to free memory\n\n\ndef dynamic_plot_html(workload, algorithm, seed):\n    \"\"\"\n    Dynamically plot the three objectives for a given workload and algorithm for a specific seed using Plotly.\n    \"\"\"\n    # Collect all data to understand the range\n    all_data, global_mean, global_std = collect_all_data(workload)\n    global_min = np.min(all_data, axis=0)\n    global_max = np.max(all_data, axis=0)\n\n    # Load data for the specific seed\n    df = load_data(workload, algorithm, seed)\n\n    # Normalize data (Min-Max normalization)\n    df_normalized = (df[objectives] - global_min) / (global_max - global_min)\n\n    pareto_front, pareto_front_index = find_pareto_front(df_normalized.values, return_index=True)\n    df_normalized = df_normalized.iloc[pareto_front_index]\n\n    # Create traces for previous and current points\n    trace1 = go.Scatter3d(x=[], y=[], z=[], mode='markers', marker=dict(size=5, color='blue'))\n    trace2 = go.Scatter3d(x=[], y=[], z=[], mode='markers', marker=dict(size=5, color='red'))\n\n    # Combine traces into a data list\n    data = [trace1, trace2]\n\n    # Create the layout of the plot\n    layout = go.Layout(\n        title=f\"Dynamic Plot for {workload} - {algorithm} - Seed {seed}\",\n        scene=dict(\n            xaxis=dict(title=objectives[0], range=[0, 1]),\n            yaxis=dict(title=objectives[1], range=[0, 1]),\n            zaxis=dict(title=objectives[2], range=[0, 1])\n        )\n    )\n\n    # Create the figure\n    fig = go.Figure(data=data, layout=layout)\n\n    # Create frames for the animation\n    frames = []\n    for t in range(len(df)):\n        frame = go.Frame(\n            data=[\n                go.Scatter3d(\n                    x=df_normalized.iloc[:t+1][objectives[0]].values,\n                    y=df_normalized.iloc[:t+1][objectives[1]].values,\n                    
z=df_normalized.iloc[:t+1][objectives[2]].values,\n                    mode='markers',\n                    marker=dict(size=5, color='blue')\n                ),\n                go.Scatter3d(\n                    x=df_normalized.iloc[t:t+1][objectives[0]].values,\n                    y=df_normalized.iloc[t:t+1][objectives[1]].values,\n                    z=df_normalized.iloc[t:t+1][objectives[2]].values,\n                    mode='markers',\n                    marker=dict(size=5, color='red')\n                )\n            ]\n        )\n        frames.append(frame)\n\n    fig.frames = frames\n\n    prev_frame_button = dict(\n        args=[None, {\"frame\": {\"duration\": 0, \"redraw\": False}, \"mode\": \"immediate\", \"transition\": {\"duration\": 0}}],\n        label='Previous',\n        method='animate'\n    )\n\n    next_frame_button = dict(\n        args=[None, {\"frame\": {\"duration\": 0, \"redraw\": False}, \"mode\": \"immediate\", \"transition\": {\"duration\": 0}}],\n        label='Next',\n        method='animate'\n    )\n\n    fig.update_layout(\n        updatemenus=[dict(\n            type='buttons',\n            showactive=False,\n            y=0,\n            x=1.05,\n            xanchor='right',\n            yanchor='top',\n            pad=dict(t=0, r=10),\n            buttons=[prev_frame_button, next_frame_button]\n        )]\n    )\n\n    # fig.update_layout(sliders=sliders)\n\n    # Save the plot to HTML file\n    html_path = package_path / \"demo\" / \"comparison\" / \"htmls\" / f\"dynamic_{target}_{algorithm}_{workload}_{seed}.html\"\n    fig.write_html(str(html_path))\n\n\ndef save_individual_frames(workload, algorithm, seed):\n    \"\"\"\n    Save each frame of the three objectives as a separate plot for a given workload, algorithm, and seed.\n    \"\"\"\n    # Load data for the specific seed\n    df = load_data(workload, algorithm, seed)\n\n    # Ensure the directory for saving frames exists\n    frames_dir = package_path / \"demo\" / \"comparison\" / \"frames\" / f\"{algorithm}_{workload}_{seed}\"\n    os.makedirs(frames_dir, exist_ok=True)\n\n    for idx in range(len(df)):\n        fig = plt.figure()\n        ax = fig.add_subplot(111, projection='3d')\n\n        # Add data points from the DataFrame row by row\n        x, y, z = df.iloc[idx][objectives[0]], df.iloc[idx][objectives[1]], df.iloc[idx][objectives[2]]\n\n        # Plot and customize as needed\n        ax.scatter(x, y, z, color='r')\n        ax.set_title(f\"Frame {idx} for {workload} - {algorithm} - Seed {seed}\")\n        ax.set_xlabel(objectives[0])\n        ax.set_ylabel(objectives[1])\n        ax.set_zlabel(objectives[2])\n\n        # Save the plot as a file\n        frame_file = frames_dir / f\"frame_{idx:04d}.png\"\n        plt.savefig(frame_file)\n        plt.close(fig)  # Close the plot to free memory\n\n\ndef load_workloads():\n    file_path = package_path / \"demo\" / \"comparison\" / f\"features_by_workload_{target}.json\"\n    with open(file_path, \"r\") as f:\n        return json.load(f).keys()\n\n\ndef plot_pareto_front_html(workload):\n    # df = load_and_prepare_data(gcc_samples_path / f\"GCC_{workload}.json\")\n    df = load_and_prepare_data(llvm_samples_path / f\"LLVM_{workload}.json\")\n    df_normalized = (df - df.min()) / (df.max() - df.min())\n    _, pareto_indices = find_pareto_front(df_normalized[objectives].values, return_index=True)\n    \n    # Retrieve Pareto points\n    pareto_points = 
df_normalized.iloc[pareto_indices][objectives]\n\n    # Create a 3D scatter plot using plotly\n    fig = go.Figure(data=[go.Scatter3d(\n        x=pareto_points[objectives[0]],\n        y=pareto_points[objectives[1]],\n        z=pareto_points[objectives[2]],\n        mode='markers',\n        marker=dict(\n            size=5,\n            color='blue',  # set color to blue\n            opacity=0.8\n        )\n    )])\n\n    # Update the layout\n    fig.update_layout(\n        title=f\"Pareto Front for {workload}\",\n        scene=dict(\n            xaxis_title=objectives[0],\n            yaxis_title=objectives[1],\n            zaxis_title=objectives[2]\n        )\n    )\n\n    # Define the path for HTML file\n    html_path = package_path / \"demo\" / \"comparison\" / \"htmls\"\n    # Ensure the directory exists\n    html_path.mkdir(parents=True, exist_ok=True)\n\n    # Save the plot as an HTML file\n    fig.write_html(str(html_path / f\"{target}_pareto_front_{workload}.html\"))\n\n\ndef plot_pareto_front(workload):\n    # df = load_and_prepare_data(gcc_samples_path / f\"GCC_{workload}.json\")\n    # df = load_and_prepare_data(llvm_samples_path / f\"LLVM_{workload}.json\")\n    df = load_data(workload, \"ParEGO\", 65535)\n    df_normalized = (df - df.min()) / (df.max() - df.min())\n    _, pareto_indices = find_pareto_front(df_normalized[objectives].values, return_index=True)\n\n    # Retrieve Pareto points\n    points = df_normalized.iloc[pareto_indices][objectives]\n\n    # Create a 3D scatter plot\n    fig = plt.figure()\n    ax = fig.add_subplot(111, projection='3d')\n    ax.set_title(f\"Pareto Front for {workload}\")\n    ax.set_xlabel(objectives[0])\n    ax.set_ylabel(objectives[1])\n    ax.set_zlabel(objectives[2])\n\n    # # Scatter plot for Pareto front\n    # points = df_normalized[objectives]\n\n    # Convert Series to NumPy array before plotting\n    x_values = points[objectives[0]].values\n    y_values = points[objectives[1]].values\n    z_values = points[objectives[2]].values\n\n    ax.scatter(x_values, y_values, z_values, c='b', marker='o')\n\n    # Save the plot as a file\n    file_path = package_path / \"demo\" / \"comparison\" / \"pngs\" / f\"{target}_pf_{workload}.png\"\n    plt.savefig(file_path)\n    plt.close(fig)  # Close the plot to free memory\n\n\ndef plot_all(workload, algorithm=\"\"):\n    # df = load_and_prepare_data(llvm_samples_path / f\"LLVM_{workload}.json\")\n    # df = load_and_prepare_data(gcc_samples_path / f\"GCC_{workload}.json\")\n    # df = load_data(workload, algorithm, 65535)\n    df = load_and_prepare_data(dbms_samples_path / f\"DBMS_{workload}.json\")\n    df_normalized = (df - df.min()) / (df.max() - df.min())\n    df_normalized = df  # NOTE: overrides the min-max normalization above, so raw values are plotted\n\n    # Create a 3D scatter plot\n    fig = plt.figure()\n    ax = fig.add_subplot(111, projection='3d')\n    ax.set_title(f\"All samples for {workload}\")\n    ax.set_xlabel(objectives[0])\n    ax.set_ylabel(objectives[1])\n    ax.set_zlabel(objectives[2])\n\n    # Scatter plot for Pareto front\n    points = df_normalized[objectives]\n\n    # Convert Series to NumPy array before plotting\n    x_values = points[objectives[0]].values\n    y_values = points[objectives[1]].values\n    z_values = points[objectives[2]].values\n\n    ax.scatter(x_values, y_values, z_values, c='b', marker='o')\n\n    # Save the plot as a file\n    file_path = package_path / \"demo\" / \"comparison\" / \"pngs\" / f\"{target}_{workload}.png\"\n    plt.savefig(file_path)\n    plt.close(fig)  # Close the 
plot to free memory\n\n# 2D plot all\ndef plot_all_2d(workload, algorithm=\"\"):\n    # df = load_and_prepare_data(llvm_samples_path / f\"LLVM_{workload}.json\")\n    # df = load_and_prepare_data(gcc_samples_path / f\"GCC_{workload}.json\")\n    # df = load_data(workload, algorithm, 65535)\n    df = load_and_prepare_data(dbms_samples_path / f\"DBMS_{workload}.json\")\n    df_normalized = (df - df.min()) / (df.max() - df.min())\n    df_normalized = df  # NOTE: overrides the min-max normalization above, so raw values are plotted\n\n    # Create a 2D scatter plot\n    fig = plt.figure()\n    ax = fig.add_subplot(111)\n    ax.set_title(f\"All samples for {workload}\")\n    ax.set_xlabel(objectives[0])\n    ax.set_ylabel(objectives[1])\n\n    # Scatter plot for Pareto front\n    points = df_normalized[objectives]\n\n    # Convert Series to NumPy array before plotting\n    x_values = points[objectives[0]].values\n    y_values = points[objectives[1]].values\n\n    ax.scatter(x_values, y_values, c='b', marker='o')\n\n    # Save the plot as a file\n    file_path = package_path / \"demo\" / \"comparison\" / \"pngs\" / f\"{target}_{workload}.png\"\n    plt.savefig(file_path)\n    plt.close(fig)  # Close the plot to free memory\n\nif __name__ == \"__main__\":\n    # workloads = load_workloads()\n\n    # workloads = [\n    #     \"cbench-consumer-tiff2bw\",\n    #     \"cbench-security-rijndael\",\n    #     \"cbench-security-pgp\",\n    #     \"polybench-cholesky\",\n    #     \"cbench-consumer-tiff2rgba\",\n    #     \"cbench-network-patricia\",\n    #     # \"cbench-automotive-susan-e\",\n    #     # \"polybench-symm\",\n    #     \"cbench-consumer-mad\",\n    #     \"polybench-lu\"\n    # ]\n\n    # workloads = [\n    #     \"cbench-security-sha\",\n    #     \"cbench-telecom-adpcm-c\",\n    #     \"\"\n    # ]\n\n    # LLVM\n    workloads_improved = [\n        \"cbench-telecom-gsm\",\n        \"cbench-automotive-qsort1\",\n        \"cbench-automotive-susan-e\",\n        \"cbench-consumer-tiff2rgba\",\n        \"cbench-network-patricia\",\n        \"cbench-consumer-tiff2bw\",\n        \"cbench-consumer-jpeg-d\",\n        \"cbench-telecom-adpcm-c\",\n        \"cbench-security-rijndael\",\n        \"cbench-security-sha\",\n    ]\n\n    workloads_mysql = [\n        \"sibench\",\n        \"smallbank\",\n        \"voter\",\n        \"tatp\",\n        \"tpcc\",\n        \"twitter\",\n    ]\n    seed = 65535  # Example seed\n\n    # Plot sampling results\n    for workload in workloads_mysql:\n        # for algorithm in algorithm_list:\n        plot_all_2d(workload)\n        # plot_pareto_front(workload)\n\n    # for algorithm in algorithm_list:\n    #     # dynamic_plot_html(\"cbench-consumer-tiff2bw\", algorithm, seed)\n    #     for workload in workloads:\n    #         dynamic_plot_html(workload, algorithm, seed)\n            # dynamic_plot(workload, algorithm, seed)\n        # save_individual_frames(workload, algorithm, seed)"
  },
  {
    "path": "demo/comparison/experiment_gcc.py",
    "content": "import sys\nfrom pathlib import Path\n\ncurrent_dir = Path(__file__).resolve().parent\npackage_dir = current_dir.parent.parent\nsys.path.insert(0, str(package_dir))\n\nimport argparse\nimport json\nimport os\n\nimport numpy as np\nfrom csstuning.compiler.compiler_benchmark import CompilerBenchmarkBase\n\nfrom transopt.benchmark import instantiate_problems\nfrom transopt.KnowledgeBase.kb_builder import construct_knowledgebase\nfrom transopt.KnowledgeBase.TransferDataHandler import OptTaskDataHandler\nfrom optimizer.construct_optimizer import get_optimizer\n\nos.environ[\"MKL_NUM_THREADS\"] = \"1\"\nos.environ[\"NUMEXPR_NUM_THREADS\"] = \"1\"\nos.environ[\"OMP_NUM_THREADS\"] = \"1\"\n\n\ndef execute_tasks(tasks, args):\n    kb = construct_knowledgebase(args)\n    testsuits = instantiate_problems(tasks, args.seed)\n    optimizer = get_optimizer(args)\n    data_handler = OptTaskDataHandler(kb, args)\n    optimizer.optimize(testsuits, data_handler)\n\n\ndef split_into_segments(lst, n):\n    lst = list(lst)\n    k, m = divmod(len(lst), n)\n    return [lst[i * k + min(i, m) : (i + 1) * k + min(i + 1, m)] for i in range(n)]\n\n\ndef get_workloads(workloads, split_index, total_splits=10):\n    segments = split_into_segments(workloads, total_splits)\n    if split_index >= len(segments):\n        raise IndexError(\"split index out of range\")\n\n    return segments[split_index]\n\n\ndef load_features():\n    file_path = package_dir / \"demo\" / \"comparison\" / \"features_by_workload_gcc_extra.json\"\n    with open(file_path, \"r\") as f:\n        return json.load(f)\n\n\ndef configure_experiment(workload, features, seed, optimizer_name, exp_path, budget=20, init_number=10):\n    exp_name = f\"gcc_{workload}\"\n    args = argparse.Namespace(\n        seed=seed,\n        optimizer=optimizer_name,\n        budget=budget,\n        init_number=init_number,\n        pop_size=init_number,\n        init_method=\"random\",\n        exp_path=exp_path,\n        exp_name=exp_name,\n        verbose=True,\n        normalize=\"norm\",\n        acquisition_func=\"LCB\",\n    )\n    tasks = {\n        \"GCC\": {\n            \"budget\": budget,\n            \"workloads\": [workload],\n            \"knobs\": features[workload][\"top\"],\n        },\n    }\n    return tasks, args\n\ndef main(optimizers = [], repeat=5, budget=500, init_number=21):\n    features = load_features()\n\n    parser = argparse.ArgumentParser(description=\"Run optimization experiments\")\n    parser.add_argument(\"--split_index\", type=int, default=0,\n                        help=\"Index for splitting the workload segments\")\n    args = parser.parse_args()\n\n    available_workloads = [\n        \"polybench-3mm\",\n        \"cbench-automotive-susan-c\",\n        \"cbench-consumer-tiff2dither\",\n        \"cbench-automotive-bitcount\",\n        \"polybench-2mm\",\n        \"polybench-adi\",\n        \"cbench-office-stringsearch2\",\n        \"polybench-fdtd-2d\",\n        \"polybench-atax\",\n        \"polybench-doitgen\",\n        \"polybench-durbin\",\n        \"polybench-fdtd-apml\",\n        \"polybench-gemver\",\n        \"polybench-gesummv\",      \n    ]\n    # available_workloads = features.keys()\n    \n    workloads = get_workloads(available_workloads, args.split_index)\n\n    exp_path = package_dir / \"experiment_results\"\n\n    for optimizer_name in optimizers:\n        for workload in workloads:\n            for i in range(repeat):\n                tasks, exp_args = configure_experiment(\n                    
workload,\n                    features,\n                    65535 + i,\n                    optimizer_name,\n                    exp_path,\n                    budget,\n                    init_number,\n                )\n                execute_tasks(tasks, exp_args)\n\n\ndef main_debug(repeat=1, budget=20, init_number=10):\n    features = load_features()\n\n    parser = argparse.ArgumentParser(description=\"Run optimization experiments\")\n    parser.add_argument(\"--split_index\", type=int, default=9,\n                        help=\"Index for splitting the workload segments\")\n    args = parser.parse_args()\n\n    workloads = get_workloads(features.keys(), args.split_index)[:1]\n\n    workloads = [\"cbench-consumer-jpeg-d\"]\n    exp_path = package_dir / \"experiment_results\"\n\n    for optimizer_name in [\"MoeadEGO\"]:\n        for workload in workloads:\n            for i in range(repeat):\n                tasks, exp_args = configure_experiment(\n                    workload,\n                    features,\n                    65535 + i,\n                    optimizer_name,\n                    exp_path,\n                    budget,\n                    init_number,\n                )\n                execute_tasks(tasks, exp_args)\n\n\nif __name__ == \"__main__\":\n    # debug = True\n    debug = False\n    if debug:\n        main_debug(repeat=5, budget=500, init_number=10)\n    else:\n        main([\"ParEGO\", \"SMSEGO\", \"MoeadEGO\", \"CauMO\"], repeat=5, budget=500, init_number=21)\n"
  },
  {
    "path": "demo/comparison/experiment_llvm.py",
    "content": "import sys\nfrom pathlib import Path\n\ncurrent_dir = Path(__file__).resolve().parent\npackage_dir = current_dir.parent.parent\nsys.path.insert(0, str(package_dir))\n\nimport argparse\nimport json\nimport os\n\nimport numpy as np\nfrom csstuning.compiler.compiler_benchmark import CompilerBenchmarkBase\n\nfrom transopt.benchmark import instantiate_problems\nfrom transopt.KnowledgeBase.kb_builder import construct_knowledgebase\nfrom transopt.KnowledgeBase.TransferDataHandler import OptTaskDataHandler\nfrom optimizer.construct_optimizer import get_optimizer\n\nos.environ[\"MKL_NUM_THREADS\"] = \"1\"\nos.environ[\"NUMEXPR_NUM_THREADS\"] = \"1\"\nos.environ[\"OMP_NUM_THREADS\"] = \"1\"\n\n\ndef execute_tasks(tasks, args):\n    kb = construct_knowledgebase(args)\n    testsuits = instantiate_problems(tasks, args.seed)\n    optimizer = get_optimizer(args)\n    data_handler = OptTaskDataHandler(kb, args)\n    optimizer.optimize(testsuits, data_handler)\n\n\ndef split_into_segments(lst, n):\n    lst = list(lst)\n    k, m = divmod(len(lst), n)\n    return [lst[i * k + min(i, m) : (i + 1) * k + min(i + 1, m)] for i in range(n)]\n\n\ndef get_workloads(workloads, split_index, total_splits=10):\n    segments = split_into_segments(workloads, total_splits)\n    if split_index >= len(segments):\n        raise IndexError(\"split index out of range\")\n\n    return segments[split_index]\n\n\ndef load_features(file_path):\n    with open(file_path, \"r\") as f:\n        return json.load(f)\n\n\ndef configure_experiment(workload, features, seed, optimizer_name, exp_path, budget=20, init_number=10):\n    exp_name = f\"llvm_{workload}\"\n    args = argparse.Namespace(\n        seed=seed,\n        optimizer=optimizer_name,\n        budget=budget,\n        init_number=init_number,\n        init_method=\"random\",\n        exp_path=exp_path,\n        exp_name=exp_name,\n        verbose=True,\n        normalize=\"norm\",\n        acquisition_func=\"LCB\",\n    )\n    tasks = {\n        \"LLVM\": {\n            \"budget\": budget,\n            \"workloads\": [workload],\n            \"knobs\": features[workload][\"top\"],\n        },\n    }\n    return tasks, args\n\ndef main(optimizers = [], repeat=5, budget=500, init_number=21):\n    features_file = package_dir / \"demo\" / \"comparison\" / \"features_by_workload_llvm.json\"\n    features = load_features(features_file)\n\n    parser = argparse.ArgumentParser(description=\"Run optimization experiments\")\n    parser.add_argument(\"--split_index\", type=int, default=0,\n                        help=\"Index for splitting the workload segments\")\n    args = parser.parse_args()\n\n    workloads = get_workloads(features.keys(), args.split_index)\n\n    exp_path = Path.cwd() / \"experiment_results\"\n\n    for optimizer_name in optimizers:\n        for workload in workloads:\n            for i in range(repeat):\n                tasks, exp_args = configure_experiment(\n                    workload,\n                    features,\n                    65535 + i,\n                    optimizer_name,\n                    exp_path,\n                    budget,\n                    init_number,\n                )\n                execute_tasks(tasks, exp_args)\n\n\ndef main_debug(repeat=1, budget=20, init_number=10):\n    features_file = package_dir / \"demo\" / \"comparison\" / \"features_by_workload_llvm.json\"\n    features = load_features(features_file)\n\n    parser = argparse.ArgumentParser(description=\"Run optimization experiments\")\n    
parser.add_argument(\"--split_index\", type=int, default=0,\n                        help=\"Index for splitting the workload segments\")\n    args = parser.parse_args()\n\n    workloads = get_workloads(features.keys(), args.split_index)[:1]\n\n    exp_path = Path.cwd() / \"experiment_results\"\n\n    for optimizer_name in [\"MoeadEGO\"]:\n        for workload in workloads:\n            for i in range(repeat):\n                tasks, exp_args = configure_experiment(\n                    workload,\n                    features,\n                    65535 + i,\n                    optimizer_name,\n                    exp_path,\n                    budget,\n                    init_number,\n                )\n                execute_tasks(tasks, exp_args)\n\n\nif __name__ == \"__main__\":\n    # debug = True\n    debug = False\n    if debug:\n        main_debug(repeat=1, budget=20, init_number=11)\n    else:\n        main([\"ParEGO\", \"MoeadEGO\", \"SMSEGO\", \"CauMO\"], repeat=5, budget=500, init_number=21)\n"
  },
  {
    "path": "demo/comparison/features_by_workload_gcc.json",
    "content": "{\n    \"cbench-consumer-tiff2bw\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"inline-functions\",\n            \"align-loops\",\n            \"align-functions\",\n            \"gcse\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"inline-functions\",\n            \"align-loops\",\n            \"align-functions\",\n            \"gcse\",\n            \"tree-ch\",\n            \"tree-loop-vectorize\",\n            \"vect-cost-model\",\n            \"tree-vrp\",\n            \"tree-pre\",\n            \"schedule-insns2\",\n            \"tree-dominator-opts\",\n            \"inline-small-functions\",\n            \"expensive-optimizations\",\n            \"tree-ter\",\n            \"code-hoisting\",\n            \"ipa-cp\",\n            \"forward-propagate\"\n        ]\n    },\n    \"cbench-security-rijndael\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-loops\",\n            \"align-labels\",\n            \"align-functions\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-loops\",\n            \"align-labels\",\n            \"align-functions\",\n            \"expensive-optimizations\",\n            \"gcse\",\n            \"schedule-insns2\",\n            \"tree-ter\",\n            \"guess-branch-probability\",\n            \"tree-pre\",\n            \"code-hoisting\",\n            \"tree-vrp\",\n            \"tree-sra\",\n            \"dse\",\n            \"tree-dominator-opts\",\n            \"peel-loops\",\n            \"if-conversion\",\n            \"tree-fre\",\n            \"rerun-cse-after-loop\",\n            \"omit-frame-pointer\"\n        ]\n    },\n    \"cbench-security-pgp\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-loops\",\n            \"align-labels\",\n            \"align-functions\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-loops\",\n            \"align-labels\",\n            \"align-functions\",\n            \"inline-functions\",\n            \"inline-small-functions\",\n            \"gcse\",\n            \"schedule-insns2\",\n            \"tree-vrp\",\n            \"tree-dominator-opts\",\n            \"tree-ccp\",\n            \"guess-branch-probability\",\n            \"expensive-optimizations\",\n            \"tree-ch\",\n            \"peel-loops\",\n            \"tree-partial-pre\",\n            \"tree-loop-vectorize\",\n            \"code-hoisting\",\n            \"dse\",\n            \"caller-saves\"\n        ]\n    },\n    \"polybench-cholesky\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-loops\",\n            \"align-labels\",\n            \"align-functions\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-loops\",\n            \"align-labels\",\n            \"align-functions\",\n            \"peel-loops\",\n            \"tree-ch\",\n            \"guess-branch-probability\",\n            \"tree-loop-vectorize\",\n            \"reorder-blocks-algorithm\",\n            \"ipa-cp\",\n            \"inline-small-functions\",\n            \"unswitch-loops\",\n            \"math-errno\",\n            \"inline-functions-called-once\",\n            \"optimize-strlen\",\n            \"tree-vrp\",\n            \"partial-inlining\",\n            
\"reorder-blocks-and-partition\",\n            \"ipa-icf-functions\",\n            \"associative-math\"\n        ]\n    },\n    \"cbench-telecom-crc32\": {\n        \"common\": [\n            \"align-jumps\",\n            \"inline-small-functions\",\n            \"align-labels\",\n            \"inline-functions\",\n            \"align-loops\",\n            \"align-functions\",\n            \"tree-ch\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"inline-small-functions\",\n            \"align-labels\",\n            \"inline-functions\",\n            \"align-loops\",\n            \"align-functions\",\n            \"tree-ch\",\n            \"guess-branch-probability\",\n            \"omit-frame-pointer\",\n            \"schedule-insns2\",\n            \"expensive-optimizations\",\n            \"tree-vrp\",\n            \"caller-saves\",\n            \"gcse\",\n            \"tree-dominator-opts\",\n            \"cx-limited-range \",\n            \"compare-elim\",\n            \"tree-pre\",\n            \"split-loops\",\n            \"reorder-functions\"\n        ]\n    },\n    \"polybench-fdtd-apml\": {\n        \"common\": [\n            \"align-jumps\",\n            \"tree-ccp\",\n            \"align-labels\",\n            \"align-loops\",\n            \"align-functions\",\n            \"tree-ch\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"tree-ccp\",\n            \"align-labels\",\n            \"align-loops\",\n            \"align-functions\",\n            \"tree-ch\",\n            \"unsafe-math-optimizations\",\n            \"tree-pre\",\n            \"tree-fre\",\n            \"gcse\",\n            \"guess-branch-probability\",\n            \"inline-functions-called-once\",\n            \"omit-frame-pointer\",\n            \"code-hoisting\",\n            \"tree-dominator-opts\",\n            \"tree-vrp\",\n            \"tree-loop-vectorize\",\n            \"gcse-after-reload\",\n            \"move-loop-invariants\",\n            \"hoist-adjacent-loads\"\n        ]\n    },\n    \"cbench-network-patricia\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-loops\",\n            \"align-functions\",\n            \"split-loops\",\n            \"vect-cost-model\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-loops\",\n            \"align-functions\",\n            \"split-loops\",\n            \"vect-cost-model\",\n            \"inline-functions\",\n            \"inline-small-functions\",\n            \"optimize-strlen\",\n            \"tree-vrp\",\n            \"gcse\",\n            \"schedule-insns2\",\n            \"tree-copy-prop\",\n            \"reorder-blocks\",\n            \"tree-dominator-opts\",\n            \"reorder-blocks-and-partition\",\n            \"tree-pta\",\n            \"tree-ch\",\n            \"if-conversion\"\n        ]\n    },\n    \"cbench-consumer-tiff2rgba\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-loops\",\n            \"align-functions\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-loops\",\n            \"align-functions\",\n            \"tree-ch\",\n            
\"tree-loop-vectorize\",\n            \"vect-cost-model\",\n            \"tree-pre\",\n            \"schedule-insns2\",\n            \"tree-vrp\",\n            \"gcse\",\n            \"tree-dominator-opts\",\n            \"inline-small-functions\",\n            \"tree-ter\",\n            \"inline-functions\",\n            \"expensive-optimizations\",\n            \"tree-pta\",\n            \"omit-frame-pointer\",\n            \"code-hoisting\"\n        ]\n    },\n    \"polybench-symm\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"align-loops\",\n            \"align-functions\",\n            \"tree-ch\",\n            \"peel-loops\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"align-loops\",\n            \"align-functions\",\n            \"tree-ch\",\n            \"peel-loops\",\n            \"tree-dominator-opts\",\n            \"tree-vrp\",\n            \"schedule-insns2\",\n            \"gcse\",\n            \"guess-branch-probability\",\n            \"inline-functions-called-once\",\n            \"inline-functions\",\n            \"inline-small-functions\",\n            \"expensive-optimizations\",\n            \"vect-cost-model\",\n            \"tree-fre\",\n            \"ipa-cp\",\n            \"ssa-phiopt\",\n            \"tree-copy-prop\"\n        ]\n    },\n    \"cbench-automotive-susan-e\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-loops\",\n            \"align-functions\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-loops\",\n            \"align-functions\",\n            \"cprop-registers\",\n            \"tree-vrp\",\n            \"tree-ch\",\n            \"schedule-insns2\",\n            \"gcse\",\n            \"tree-ter\",\n            \"code-hoisting\",\n            \"math-errno\",\n            \"tree-pre\",\n            \"expensive-optimizations\",\n            \"move-loop-invariants\",\n            \"tree-dominator-opts\",\n            \"caller-saves\",\n            \"unswitch-loops\",\n            \"dse\"\n        ]\n    },\n    \"cbench-telecom-adpcm-d\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-loops\",\n            \"align-labels\",\n            \"align-functions\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-loops\",\n            \"align-labels\",\n            \"align-functions\",\n            \"if-conversion\",\n            \"ssa-phiopt\",\n            \"guess-branch-probability\",\n            \"dce\",\n            \"schedule-insns2\",\n            \"tree-switch-conversion\",\n            \"tree-builtin-call-dce\",\n            \"tree-dominator-opts\",\n            \"peel-loops\",\n            \"predictive-commoning\",\n            \"vect-cost-model\",\n            \"tree-loop-vectorize\",\n            \"shrink-wrap\",\n            \"code-hoisting\",\n            \"math-errno\",\n            \"ipa-reference\"\n        ]\n    },\n    \"polybench-ludcmp\": {\n        \"common\": [\n            \"align-jumps\",\n            \"tree-ccp\",\n            \"tree-loop-vectorize\",\n            \"inline-functions-called-once\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-loops\",\n            \"align-functions\",\n         
   \"tree-ch\",\n            \"peel-loops\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"tree-ccp\",\n            \"tree-loop-vectorize\",\n            \"inline-functions-called-once\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-loops\",\n            \"align-functions\",\n            \"tree-ch\",\n            \"peel-loops\",\n            \"tree-vrp\",\n            \"schedule-insns2\",\n            \"tree-dominator-opts\",\n            \"inline-small-functions\",\n            \"reorder-blocks-algorithm\",\n            \"ipa-cp\",\n            \"gcse\",\n            \"reorder-blocks-and-partition\",\n            \"tree-pre\",\n            \"tree-dce\"\n        ]\n    },\n    \"polybench-lu\": {\n        \"common\": [\n            \"align-jumps\",\n            \"tree-loop-vectorize\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-loops\",\n            \"align-functions\",\n            \"peel-loops\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"tree-loop-vectorize\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-loops\",\n            \"align-functions\",\n            \"peel-loops\",\n            \"tree-ch\",\n            \"tree-vrp\",\n            \"schedule-insns2\",\n            \"gcse\",\n            \"tree-fre\",\n            \"inline-small-functions\",\n            \"ipa-cp\",\n            \"inline-functions-called-once\",\n            \"tree-dominator-opts\",\n            \"reorder-blocks-algorithm\",\n            \"tree-pre\",\n            \"reorder-blocks\",\n            \"code-hoisting\"\n        ]\n    },\n    \"cbench-consumer-mad\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"tree-vrp\",\n            \"align-loops\",\n            \"tree-pre\",\n            \"align-functions\",\n            \"tree-pta\",\n            \"vect-cost-model\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"tree-vrp\",\n            \"align-loops\",\n            \"tree-pre\",\n            \"align-functions\",\n            \"tree-pta\",\n            \"vect-cost-model\",\n            \"guess-branch-probability\",\n            \"if-conversion\",\n            \"optimize-sibling-calls\",\n            \"tree-slsr\",\n            \"shrink-wrap\",\n            \"reorder-blocks-and-partition\",\n            \"crossjumping\",\n            \"version-loops-for-strides\",\n            \"ipa-icf\",\n            \"compare-elim\",\n            \"lra-remat\",\n            \"ipa-sra\"\n        ]\n    },\n    \"cbench-automotive-qsort1\": {\n        \"common\": [\n            \"align-jumps\",\n            \"tree-loop-vectorize\",\n            \"inline-small-functions\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-loops\",\n            \"align-functions\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"tree-loop-vectorize\",\n            \"inline-small-functions\",\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-loops\",\n            \"align-functions\",\n            \"tree-ch\",\n            \"tree-dominator-opts\",\n            \"peel-loops\",\n            \"schedule-insns2\",\n            \"tree-vrp\",\n            \"inline-functions\",\n            \"partial-inlining\",\n   
         \"gcse\",\n            \"ssa-phiopt\",\n            \"inline-functions-called-once\",\n            \"vect-cost-model\",\n            \"move-loop-invariants\",\n            \"tree-tail-merge\"\n        ]\n    },\n    \"polybench-bicg\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"align-loops\",\n            \"vect-cost-model\",\n            \"align-functions\",\n            \"peel-loops\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"align-loops\",\n            \"vect-cost-model\",\n            \"align-functions\",\n            \"peel-loops\",\n            \"inline-small-functions\",\n            \"ipa-cp\",\n            \"guess-branch-probability\",\n            \"tree-tail-merge\",\n            \"optimize-strlen\",\n            \"inline-functions-called-once\",\n            \"tree-ch\",\n            \"tree-vrp\",\n            \"tree-coalesce-vars\",\n            \"tree-loop-distribute-patterns\",\n            \"optimize-sibling-calls\",\n            \"forward-propagate\",\n            \"omit-frame-pointer\",\n            \"tree-ter\"\n        ]\n    },\n    \"cbench-security-sha\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"align-loops\",\n            \"align-functions\",\n            \"tree-ch\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"align-loops\",\n            \"align-functions\",\n            \"tree-ch\",\n            \"tree-loop-vectorize\",\n            \"tree-vrp\",\n            \"tree-dominator-opts\",\n            \"schedule-insns2\",\n            \"guess-branch-probability\",\n            \"ipa-ra\",\n            \"gcse\",\n            \"ipa-sra\",\n            \"tree-pre\",\n            \"predictive-commoning\",\n            \"expensive-optimizations\",\n            \"tree-slp-vectorize\",\n            \"reciprocal-math\",\n            \"vect-cost-model\",\n            \"inline-small-functions\"\n        ]\n    },\n    \"cbench-consumer-jpeg-d\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-loops\",\n            \"align-labels\",\n            \"align-functions\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-loops\",\n            \"align-labels\",\n            \"align-functions\",\n            \"math-errno\",\n            \"inline-small-functions\",\n            \"gcse-after-reload\",\n            \"guess-branch-probability\",\n            \"lra-remat\",\n            \"tree-slsr\",\n            \"thread-jumps\",\n            \"tree-sra\",\n            \"combine-stack-adjustments\",\n            \"forward-propagate\",\n            \"version-loops-for-strides\",\n            \"cx-limited-range \",\n            \"merge-constants\",\n            \"associative-math\",\n            \"tree-loop-vectorize\",\n            \"reorder-blocks\"\n        ]\n    },\n    \"cbench-telecom-adpcm-c\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"align-loops\",\n            \"align-functions\",\n            \"vect-cost-model\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"align-loops\",\n            \"align-functions\",\n            \"vect-cost-model\",\n            \"if-conversion2\",\n            \"ssa-phiopt\",\n            \"guess-branch-probability\",\n          
  \"if-conversion\",\n            \"move-loop-invariants\",\n            \"inline-small-functions\",\n            \"isolate-erroneous-paths-dereference\",\n            \"defer-pop\",\n            \"cprop-registers\",\n            \"omit-frame-pointer\",\n            \"ipa-cp\",\n            \"dce\",\n            \"signed-zeros\",\n            \"ipa-sra\",\n            \"tree-builtin-call-dce\"\n        ]\n    },\n    \"cbench-telecom-gsm\": {\n        \"common\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"align-loops\",\n            \"align-functions\",\n            \"peel-loops\"\n        ],\n        \"top\": [\n            \"align-jumps\",\n            \"align-labels\",\n            \"align-loops\",\n            \"align-functions\",\n            \"peel-loops\",\n            \"tree-loop-vectorize\",\n            \"predictive-commoning\",\n            \"tree-dominator-opts\",\n            \"tree-ch\",\n            \"tree-vrp\",\n            \"tree-pre\",\n            \"guess-branch-probability\",\n            \"ssa-phiopt\",\n            \"if-conversion\",\n            \"math-errno\",\n            \"optimize-strlen\",\n            \"unswitch-loops\",\n            \"inline-functions-called-once\",\n            \"caller-saves\",\n            \"merge-constants\"\n        ]\n    }\n}"
  },
  {
    "path": "demo/comparison/features_by_workload_gcc_extra.json",
    "content": "{\n    \"cbench-automotive-bitcount\": {\n        \"common\": [\n            \"align-labels\",\n            \"tree-ter\",\n            \"align-functions\",\n            \"align-loops\",\n            \"align-jumps\"\n        ],\n        \"top\": [\n            \"align-labels\",\n            \"tree-ter\",\n            \"align-functions\",\n            \"align-loops\",\n            \"align-jumps\",\n            \"tree-ch\",\n            \"optimize-sibling-calls\",\n            \"guess-branch-probability\",\n            \"peephole2\",\n            \"reorder-blocks-algorithm\",\n            \"reorder-blocks\",\n            \"reorder-blocks-and-partition\",\n            \"gcse\",\n            \"tree-vrp\",\n            \"expensive-optimizations\",\n            \"tree-dce\",\n            \"schedule-insns2\",\n            \"tree-fre\",\n            \"split-loops\",\n            \"omit-frame-pointer\"\n        ]\n    },\n    \"cbench-automotive-susan-c\": {\n        \"common\": [\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-functions\",\n            \"align-loops\",\n            \"align-jumps\"\n        ],\n        \"top\": [\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-functions\",\n            \"align-loops\",\n            \"align-jumps\",\n            \"cprop-registers\",\n            \"tree-vrp\",\n            \"schedule-insns2\",\n            \"tree-ch\",\n            \"gcse\",\n            \"tree-dominator-opts\",\n            \"tree-pre\",\n            \"expensive-optimizations\",\n            \"reorder-blocks-algorithm\",\n            \"tree-ter\",\n            \"code-hoisting\",\n            \"tree-fre\",\n            \"predictive-commoning\",\n            \"reorder-blocks-and-partition\",\n            \"move-loop-invariants\"\n        ]\n    },\n    \"cbench-consumer-tiff2dither\": {\n        \"common\": [\n            \"align-labels\",\n            \"align-functions\",\n            \"align-loops\",\n            \"align-jumps\"\n        ],\n        \"top\": [\n            \"align-labels\",\n            \"align-functions\",\n            \"align-loops\",\n            \"align-jumps\",\n            \"reorder-blocks-algorithm\",\n            \"vect-cost-model\",\n            \"inline-functions-called-once\",\n            \"hoist-adjacent-loads\",\n            \"guess-branch-probability\",\n            \"inline-functions\",\n            \"ipa-ra\",\n            \"reciprocal-math\",\n            \"tree-ccp\",\n            \"ipa-sra\",\n            \"optimize-strlen\",\n            \"split-paths\",\n            \"reorder-functions\",\n            \"caller-saves\",\n            \"tree-builtin-call-dce\",\n            \"tree-vrp\"\n        ]\n    },\n    \"cbench-office-stringsearch2\": {\n        \"common\": [\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"inline-functions\",\n            \"align-functions\",\n            \"align-loops\",\n            \"align-jumps\",\n            \"inline-small-functions\"\n        ],\n        \"top\": [\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"inline-functions\",\n            \"align-functions\",\n            \"align-loops\",\n            \"align-jumps\",\n            \"inline-small-functions\",\n            \"ipa-pure-const\",\n            \"tree-dominator-opts\",\n            \"tree-pre\",\n            \"schedule-insns2\",\n            \"tree-vrp\",\n        
    \"gcse\",\n            \"tree-ch\",\n            \"partial-inlining\",\n            \"expensive-optimizations\",\n            \"tree-ccp\",\n            \"tree-fre\",\n            \"dse\",\n            \"reorder-blocks-algorithm\"\n        ]\n    },\n    \"polybench-2mm\": {\n        \"common\": [\n            \"align-labels\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"ipa-cp\",\n            \"align-jumps\",\n            \"tree-ch\"\n        ],\n        \"top\": [\n            \"align-labels\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"ipa-cp\",\n            \"align-jumps\",\n            \"tree-ch\",\n            \"tree-vrp\",\n            \"schedule-insns2\",\n            \"align-functions\",\n            \"tree-dominator-opts\",\n            \"predictive-commoning\",\n            \"inline-functions-called-once\",\n            \"gcse\",\n            \"tree-pre\",\n            \"guess-branch-probability\",\n            \"inline-small-functions\",\n            \"tree-fre\",\n            \"tree-partial-pre\",\n            \"tree-ccp\",\n            \"tree-loop-vectorize\"\n        ]\n    },\n    \"polybench-3mm\": {\n        \"common\": [\n            \"align-labels\",\n            \"tree-dominator-opts\",\n            \"align-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"tree-vrp\",\n            \"align-jumps\",\n            \"tree-ch\"\n        ],\n        \"top\": [\n            \"align-labels\",\n            \"tree-dominator-opts\",\n            \"align-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"tree-vrp\",\n            \"align-jumps\",\n            \"tree-ch\",\n            \"ipa-cp\",\n            \"schedule-insns2\",\n            \"predictive-commoning\",\n            \"inline-functions-called-once\",\n            \"tree-pre\",\n            \"inline-small-functions\",\n            \"gcse\",\n            \"guess-branch-probability\",\n            \"tree-fre\",\n            \"tree-partial-pre\",\n            \"tree-ccp\",\n            \"tree-dce\"\n        ]\n    },\n    \"polybench-adi\": {\n        \"common\": [\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"tree-dominator-opts\",\n            \"inline-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"tree-loop-vectorize\",\n            \"ipa-cp\",\n            \"align-jumps\",\n            \"tree-ch\",\n            \"inline-small-functions\"\n        ],\n        \"top\": [\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"tree-dominator-opts\",\n            \"inline-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"tree-loop-vectorize\",\n            \"ipa-cp\",\n            \"align-jumps\",\n            \"tree-ch\",\n            \"inline-small-functions\",\n            \"ipa-cp-clone\",\n            \"tree-vrp\",\n            \"tree-fre\",\n            \"align-functions\",\n            \"tree-ccp\",\n            \"code-hoisting\",\n            \"schedule-insns2\",\n            \"tree-pre\",\n            \"gcse\"\n        ]\n    },\n    \"polybench-atax\": {\n        \"common\": [\n            \"align-labels\",\n            \"align-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"align-jumps\",\n            \"tree-ch\",\n            \"inline-small-functions\"\n        ],\n        \"top\": [\n   
         \"align-labels\",\n            \"align-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"align-jumps\",\n            \"tree-ch\",\n            \"inline-small-functions\",\n            \"tree-loop-vectorize\",\n            \"tree-pre\",\n            \"schedule-insns2\",\n            \"tree-vrp\",\n            \"tree-dominator-opts\",\n            \"ipa-cp\",\n            \"gcse\",\n            \"predictive-commoning\",\n            \"inline-functions-called-once\",\n            \"guess-branch-probability\",\n            \"tree-partial-pre\",\n            \"optimize-strlen\",\n            \"tree-fre\"\n        ]\n    },\n    \"polybench-doitgen\": {\n        \"common\": [\n            \"align-labels\",\n            \"align-functions\",\n            \"align-loops\",\n            \"align-jumps\"\n        ],\n        \"top\": [\n            \"align-labels\",\n            \"align-functions\",\n            \"align-loops\",\n            \"align-jumps\",\n            \"peel-loops\",\n            \"tree-pre\",\n            \"tree-dominator-opts\",\n            \"predictive-commoning\",\n            \"gcse\",\n            \"guess-branch-probability\",\n            \"tree-vrp\",\n            \"tree-fre\",\n            \"tree-ch\",\n            \"tree-partial-pre\",\n            \"vect-cost-model\",\n            \"code-hoisting\",\n            \"inline-functions-called-once\",\n            \"inline-small-functions\",\n            \"tree-builtin-call-dce\",\n            \"tree-dce\"\n        ]\n    },\n    \"polybench-durbin\": {\n        \"common\": [\n            \"align-labels\",\n            \"align-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"align-jumps\"\n        ],\n        \"top\": [\n            \"align-labels\",\n            \"align-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"align-jumps\",\n            \"predictive-commoning\",\n            \"tree-vrp\",\n            \"tree-ch\",\n            \"tree-loop-vectorize\",\n            \"tree-pre\",\n            \"guess-branch-probability\",\n            \"ipa-cp\",\n            \"inline-small-functions\",\n            \"gcse\",\n            \"schedule-insns2\",\n            \"inline-functions-called-once\",\n            \"reciprocal-math\",\n            \"indirect-inlining\",\n            \"devirtualize\",\n            \"auto-inc-dec\"\n        ]\n    },\n    \"polybench-fdtd-2d\": {\n        \"common\": [\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"tree-loop-vectorize\",\n            \"align-jumps\"\n        ],\n        \"top\": [\n            \"align-labels\",\n            \"guess-branch-probability\",\n            \"align-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"tree-loop-vectorize\",\n            \"align-jumps\",\n            \"ipa-cp\",\n            \"tree-ch\",\n            \"tree-vrp\",\n            \"inline-functions-called-once\",\n            \"tree-dominator-opts\",\n            \"inline-small-functions\",\n            \"schedule-insns2\",\n            \"tree-fre\",\n            \"gcse\",\n            \"code-hoisting\",\n            \"tree-pre\",\n            \"tree-ccp\",\n            \"ipa-icf-variables\"\n        ]\n    },\n    \"polybench-fdtd-apml\": {\n        \"common\": [\n            \"align-labels\",\n            
\"align-functions\",\n            \"align-loops\",\n            \"align-jumps\",\n            \"tree-ch\"\n        ],\n        \"top\": [\n            \"align-labels\",\n            \"align-functions\",\n            \"align-loops\",\n            \"align-jumps\",\n            \"tree-ch\",\n            \"unsafe-math-optimizations\",\n            \"tree-pre\",\n            \"tree-fre\",\n            \"gcse\",\n            \"guess-branch-probability\",\n            \"inline-functions-called-once\",\n            \"code-hoisting\",\n            \"tree-dominator-opts\",\n            \"omit-frame-pointer\",\n            \"tree-loop-vectorize\",\n            \"move-loop-invariants\",\n            \"peephole2\",\n            \"inline-small-functions\",\n            \"ipa-cp\",\n            \"store-merging\"\n        ]\n    },\n    \"polybench-gemver\": {\n        \"common\": [\n            \"align-labels\",\n            \"align-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"tree-loop-vectorize\",\n            \"align-jumps\"\n        ],\n        \"top\": [\n            \"align-labels\",\n            \"align-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"tree-loop-vectorize\",\n            \"align-jumps\",\n            \"tree-pre\",\n            \"inline-small-functions\",\n            \"tree-vrp\",\n            \"tree-dominator-opts\",\n            \"ipa-cp\",\n            \"guess-branch-probability\",\n            \"tree-ch\",\n            \"predictive-commoning\",\n            \"inline-functions-called-once\",\n            \"gcse\",\n            \"tree-fre\",\n            \"dse\",\n            \"partial-inlining\",\n            \"combine-stack-adjustments\"\n        ]\n    },\n    \"polybench-gesummv\": {\n        \"common\": [\n            \"align-labels\",\n            \"align-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"align-jumps\"\n        ],\n        \"top\": [\n            \"align-labels\",\n            \"align-functions\",\n            \"align-loops\",\n            \"peel-loops\",\n            \"align-jumps\",\n            \"unsafe-math-optimizations\",\n            \"guess-branch-probability\",\n            \"inline-small-functions\",\n            \"schedule-insns2\",\n            \"tree-dominator-opts\",\n            \"tree-vrp\",\n            \"gcse\",\n            \"inline-functions-called-once\",\n            \"tree-ch\",\n            \"ipa-cp\",\n            \"vect-cost-model\",\n            \"dce\",\n            \"ipa-icf\",\n            \"gcse-after-reload\",\n            \"tree-ter\"\n        ]\n    }\n}"
  },
  {
    "path": "demo/comparison/features_by_workload_llvm.json",
    "content": "{\n    \"cbench-telecom-gsm\": {\n        \"common\": [\n            \"early-cse\",\n            \"gvn\",\n            \"instcombine\",\n            \"jump-threading\"\n        ],\n        \"top\": [\n            \"early-cse\",\n            \"gvn\",\n            \"instcombine\",\n            \"jump-threading\",\n            \"sroa\",\n            \"mem2reg\",\n            \"licm\",\n            \"inject-tli-mappings\",\n            \"early-cse-memssa\",\n            \"loop-unroll\",\n            \"loop-vectorize\",\n            \"transform-warning\",\n            \"libcalls-shrinkwrap\",\n            \"adce\",\n            \"indvars\",\n            \"loop-sink\",\n            \"callsite-splitting\",\n            \"globalopt\",\n            \"loop-rotate\",\n            \"speculative-execution\"\n        ]\n    },\n    \"cbench-automotive-qsort1\": {\n        \"common\": [\n            \"instcombine\",\n            \"block-freq\"\n        ],\n        \"top\": [\n            \"instcombine\",\n            \"block-freq\",\n            \"globalopt\",\n            \"ipsccp\",\n            \"gvn\",\n            \"licm\",\n            \"sroa\",\n            \"loop-rotate\",\n            \"mem2reg\",\n            \"indvars\",\n            \"loop-vectorize\",\n            \"function-attrs\",\n            \"loop-unroll\",\n            \"early-cse-memssa\",\n            \"sccp\",\n            \"lazy-block-freq\",\n            \"always-inline\",\n            \"strip-dead-prototypes\",\n            \"bdce\",\n            \"domtree\"\n        ]\n    },\n    \"cbench-automotive-susan-e\": {\n        \"common\": [\n            \"loop-rotate\",\n            \"gvn\",\n            \"early-cse-memssa\",\n            \"instcombine\",\n            \"loop-unroll\",\n            \"early-cse\",\n            \"sroa\",\n            \"licm\",\n            \"mem2reg\"\n        ],\n        \"top\": [\n            \"loop-rotate\",\n            \"gvn\",\n            \"early-cse-memssa\",\n            \"instcombine\",\n            \"loop-unroll\",\n            \"early-cse\",\n            \"sroa\",\n            \"licm\",\n            \"mem2reg\",\n            \"slp-vectorizer\",\n            \"simplifycfg\",\n            \"loop-vectorize\",\n            \"tbaa\",\n            \"tailcallelim\",\n            \"function-attrs\",\n            \"instsimplify\",\n            \"reassociate\",\n            \"always-inline\",\n            \"float2int\",\n            \"dse\"\n        ]\n    },\n    \"cbench-consumer-tiff2rgba\": {\n        \"common\": [\n            \"loop-rotate\",\n            \"gvn\",\n            \"early-cse-memssa\",\n            \"instcombine\",\n            \"sroa\",\n            \"licm\",\n            \"mem2reg\"\n        ],\n        \"top\": [\n            \"loop-rotate\",\n            \"gvn\",\n            \"early-cse-memssa\",\n            \"instcombine\",\n            \"sroa\",\n            \"licm\",\n            \"mem2reg\",\n            \"slp-vectorizer\",\n            \"loop-vectorize\",\n            \"loop-unroll\",\n            \"early-cse\",\n            \"indvars\",\n            \"dse\",\n            \"globalopt\",\n            \"jump-threading\",\n            \"loop-distribute\",\n            \"memoryssa\",\n            \"loop-accesses\",\n            \"prune-eh\",\n            \"aggressive-instcombine\"\n        ]\n    },\n    \"cbench-network-patricia\": {\n        \"common\": [\n            \"instcombine\"\n        ],\n        \"top\": [\n            \"instcombine\",\n           
 \"ipsccp\",\n            \"aggressive-instcombine\",\n            \"gvn\",\n            \"globalopt\",\n            \"loop-vectorize\",\n            \"licm\",\n            \"sroa\",\n            \"mem2reg\",\n            \"simplifycfg\",\n            \"loop-rotate\",\n            \"function-attrs\",\n            \"jump-threading\",\n            \"called-value-propagation\",\n            \"early-cse-memssa\",\n            \"dse\",\n            \"indvars\",\n            \"postdomtree\",\n            \"inject-tli-mappings\",\n            \"adce\"\n        ]\n    },\n    \"cbench-automotive-bitcount\": {\n        \"common\": [\n            \"loop-rotate\",\n            \"gvn\",\n            \"licm\"\n        ],\n        \"top\": [\n            \"loop-rotate\",\n            \"gvn\",\n            \"licm\",\n            \"globalopt\",\n            \"mem2reg\",\n            \"jump-threading\",\n            \"sroa\",\n            \"instcombine\",\n            \"simplifycfg\",\n            \"speculative-execution\",\n            \"indvars\",\n            \"loop-unroll\",\n            \"scoped-noalias-aa\",\n            \"early-cse-memssa\",\n            \"adce\",\n            \"ipsccp\",\n            \"lazy-branch-prob\",\n            \"slp-vectorizer\",\n            \"postdomtree\",\n            \"dse\"\n        ]\n    },\n    \"cbench-bzip2\": {\n        \"common\": [\n            \"loop-rotate\",\n            \"gvn\",\n            \"early-cse-memssa\",\n            \"instcombine\",\n            \"loop-unroll\",\n            \"sroa\",\n            \"licm\",\n            \"mem2reg\"\n        ],\n        \"top\": [\n            \"loop-rotate\",\n            \"gvn\",\n            \"early-cse-memssa\",\n            \"instcombine\",\n            \"loop-unroll\",\n            \"sroa\",\n            \"licm\",\n            \"mem2reg\",\n            \"slp-vectorizer\",\n            \"loop-vectorize\",\n            \"early-cse\",\n            \"indvars\",\n            \"jump-threading\",\n            \"dse\",\n            \"loop-accesses\",\n            \"loop-instsimplify\",\n            \"scoped-noalias-aa\",\n            \"lazy-block-freq\",\n            \"memcpyopt\",\n            \"always-inline\"\n        ]\n    },\n    \"cbench-consumer-tiff2bw\": {\n        \"common\": [\n            \"loop-rotate\",\n            \"gvn\",\n            \"instcombine\",\n            \"sroa\",\n            \"licm\",\n            \"jump-threading\",\n            \"mem2reg\"\n        ],\n        \"top\": [\n            \"loop-rotate\",\n            \"gvn\",\n            \"instcombine\",\n            \"sroa\",\n            \"licm\",\n            \"jump-threading\",\n            \"mem2reg\",\n            \"slp-vectorizer\",\n            \"loop-vectorize\",\n            \"early-cse-memssa\",\n            \"loop-unroll\",\n            \"alignment-from-assumptions\",\n            \"function-attrs\",\n            \"correlated-propagation\",\n            \"scoped-noalias-aa\",\n            \"openmp-opt-cgscc\",\n            \"postdomtree\",\n            \"prune-eh\",\n            \"lcssa\",\n            \"lazy-block-freq\"\n        ]\n    },\n    \"cbench-consumer-jpeg-d\": {\n        \"common\": [\n            \"loop-rotate\",\n            \"gvn\",\n            \"early-cse-memssa\",\n            \"instcombine\",\n            \"sroa\",\n            \"licm\",\n            \"mem2reg\"\n        ],\n        \"top\": [\n            \"loop-rotate\",\n            \"gvn\",\n            \"early-cse-memssa\",\n            
\"instcombine\",\n            \"sroa\",\n            \"licm\",\n            \"mem2reg\",\n            \"loop-vectorize\",\n            \"loop-unroll\",\n            \"indvars\",\n            \"dse\",\n            \"function-attrs\",\n            \"transform-warning\",\n            \"slp-vectorizer\",\n            \"alignment-from-assumptions\",\n            \"called-value-propagation\",\n            \"callsite-splitting\",\n            \"loops\",\n            \"float2int\",\n            \"elim-avail-extern\"\n        ]\n    },\n    \"cbench-telecom-adpcm-c\": {\n        \"common\": [],\n        \"top\": [\n            \"globalopt\",\n            \"gvn\",\n            \"memcpyopt\",\n            \"mem2reg\",\n            \"strip-dead-prototypes\",\n            \"simplifycfg\",\n            \"licm\",\n            \"lazy-block-freq\",\n            \"loop-instsimplify\",\n            \"sroa\",\n            \"elim-avail-extern\",\n            \"instcombine\",\n            \"libcalls-shrinkwrap\",\n            \"reassociate\",\n            \"globaldce\",\n            \"loop-rotate\",\n            \"loop-vectorize\",\n            \"ipsccp\",\n            \"globals-aa\",\n            \"function-attrs\"\n        ]\n    },\n    \"cbench-telecom-adpcm-d\": {\n        \"common\": [\n            \"instcombine\",\n            \"callsite-splitting\"\n        ],\n        \"top\": [\n            \"instcombine\",\n            \"callsite-splitting\",\n            \"globalopt\",\n            \"mem2reg\",\n            \"gvn\",\n            \"simplifycfg\",\n            \"licm\",\n            \"sroa\",\n            \"loop-unroll\",\n            \"loop-rotate\",\n            \"loop-distribute\",\n            \"indvars\",\n            \"early-cse-memssa\",\n            \"ipsccp\",\n            \"phi-values\",\n            \"scoped-noalias-aa\",\n            \"alignment-from-assumptions\",\n            \"jump-threading\",\n            \"rpo-function-attrs\",\n            \"loop-simplifycfg\"\n        ]\n    },\n    \"cbench-office-stringsearch2\": {\n        \"common\": [\n            \"instcombine\",\n            \"libcalls-shrinkwrap\"\n        ],\n        \"top\": [\n            \"instcombine\",\n            \"libcalls-shrinkwrap\",\n            \"reassociate\",\n            \"licm\",\n            \"globalopt\",\n            \"ipsccp\",\n            \"function-attrs\",\n            \"inferattrs\",\n            \"early-cse\",\n            \"gvn\",\n            \"phi-values\",\n            \"simplifycfg\",\n            \"early-cse-memssa\",\n            \"loop-rotate\",\n            \"mem2reg\",\n            \"sroa\",\n            \"callsite-splitting\",\n            \"rpo-function-attrs\",\n            \"inject-tli-mappings\",\n            \"loop-load-elim\"\n        ]\n    },\n    \"cbench-security-rijndael\": {\n        \"common\": [\n            \"loop-rotate\",\n            \"globalopt\",\n            \"gvn\",\n            \"instcombine\",\n            \"branch-prob\",\n            \"slp-vectorizer\",\n            \"globaldce\",\n            \"aggressive-instcombine\",\n            \"simplifycfg\",\n            \"loop-unroll\",\n            \"called-value-propagation\",\n            \"deadargelim\",\n            \"sroa\",\n            \"vector-combine\",\n            \"memoryssa\",\n            \"loop-vectorize\"\n        ],\n        \"top\": [\n            \"loop-rotate\",\n            \"globalopt\",\n            \"gvn\",\n            \"instcombine\",\n            \"branch-prob\",\n            
\"slp-vectorizer\",\n            \"globaldce\",\n            \"aggressive-instcombine\",\n            \"simplifycfg\",\n            \"loop-unroll\",\n            \"called-value-propagation\",\n            \"deadargelim\",\n            \"sroa\",\n            \"vector-combine\",\n            \"memoryssa\",\n            \"loop-vectorize\",\n            \"loop-simplifycfg\",\n            \"function-attrs\",\n            \"loop-distribute\",\n            \"licm\"\n        ]\n    },\n    \"cbench-security-sha\": {\n        \"common\": [\n            \"div-rem-pairs\",\n            \"correlated-propagation\"\n        ],\n        \"top\": [\n            \"div-rem-pairs\",\n            \"correlated-propagation\",\n            \"instcombine\",\n            \"globalopt\",\n            \"gvn\",\n            \"ipsccp\",\n            \"sroa\",\n            \"licm\",\n            \"mem2reg\",\n            \"loop-rotate\",\n            \"early-cse-memssa\",\n            \"function-attrs\",\n            \"strip-dead-prototypes\",\n            \"block-freq\",\n            \"indvars\",\n            \"loop-unroll\",\n            \"lcssa\",\n            \"loop-simplifycfg\",\n            \"loop-vectorize\",\n            \"branch-prob\"\n        ]\n    }\n}"
  },
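  {
    "path": "demo/comparison/load_features_example.py",
    "content": "\"\"\"Minimal sketch of one way to consume the features_by_workload_*.json files\nabove; this file name and helper are hypothetical, not part of the original demo.\nEach JSON file maps a workload to two ranked flag lists: 'common' and 'top',\nwhere 'top' extends 'common' to 20 entries.\"\"\"\nimport json\nfrom pathlib import Path\n\ncurrent_path = Path(__file__).resolve().parent\n\n\ndef load_features(compiler):\n    # compiler is e.g. \"gcc\" or \"llvm\"\n    with open(current_path / f\"features_by_workload_{compiler}.json\") as f:\n        return json.load(f)\n\n\nif __name__ == \"__main__\":\n    features = load_features(\"gcc\")\n    # Flags that appear in the top-20 list of every workload\n    shared = set.intersection(*(set(w[\"top\"]) for w in features.values()))\n    print(f\"{len(features)} workloads; flags shared by all top lists: {sorted(shared)}\")\n"
  },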
  {
    "path": "demo/comparison/plot.py",
    "content": "import json\nimport sys\nfrom pathlib import Path\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom matplotlib.ticker import MultipleLocator\n\ncurrent_path = Path(__file__).resolve().parent\npackage_path = current_path.parent.parent\nsys.path.insert(0, str(package_path))\n\npngs_path = package_path / \"demo/comparison/pngs\"\n\ndef create_plots(data, file_name, format=\"pdf\"):\n    mpl.rcParams[\"font.family\"] = [\"serif\"]\n    mpl.rcParams[\"font.serif\"] = [\"Times New Roman\"]\n\n    # Plot settings\n    fig = plt.figure(figsize=(20, 8))\n\n    # Titles for subplots\n    titles = [\"ParEGO\", \"SMS-EGO\", \"MOEA/D-EGO\", \"Ours\"]\n\n    data[0], data[2] = data[2], data[0]\n    \n    global_min = np.min([np.min(d, axis=0) for d in data], axis=0)\n    global_max = np.max([np.max(d, axis=0) for d in data], axis=0)\n    \n    for i, d in enumerate(data):\n        ax = fig.add_subplot(1, 4, i + 1, projection='3d', proj_type='ortho')\n        ax.scatter(d[:, 0], d[:, 1], d[:, 2], facecolors='none', edgecolors='#304F9E', s=50, linewidths=1)\n\n        ax.text2D(0.85, 0.85, titles[i], transform=ax.transAxes, fontsize=14,\n            verticalalignment='center', horizontalalignment='center', \n            bbox=dict(facecolor='white', alpha=0.5, boxstyle=\"round,pad=0.3\"))\n            \n        ax.view_init(elev=20, azim=-45)\n        # Set the background of each axis to be transparent\n        ax.xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))\n        ax.yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))\n        ax.zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))\n        \n        ax.set_xlim(global_min[0], global_max[0])\n        ax.set_ylim(global_min[1], global_max[1])\n        ax.set_zlim(global_min[2], global_max[2])\n        \n        ax.tick_params(labelsize=14)\n    \n    # Save the plot as a file\n    \n    # plt.savefig(Path(pngs_path) / f\"{file_name}.png\", format=\"png\", bbox_inches=\"tight\")\n    plt.savefig(Path(pngs_path) / f\"{file_name}.{format}\", format=format, bbox_inches=\"tight\")\n    plt.close(fig)\n    \n\ndef load_data(workload, algorithm, seed):\n    if target == \"llvm\":\n        result_file = llvm_results / f\"llvm_{workload}\" / algorithm / f\"{seed}_KB.json\"\n    else:\n        result_file = gcc_results / f\"gcc_{workload}\" / algorithm / f\"{seed}_KB.json\"\n    df = load_and_prepare_data(result_file)\n    return df\n\ndef load_and_prepare_data(file_path):\n    \"\"\"\n    Loads JSON data and prepares a DataFrame.\n    \"\"\"\n    with open(file_path, \"r\") as f:\n        data = json.load(f)\n        if \"1\" in data:\n            data = data[\"1\"]\n\n    input_vectors = data[\"input_vector\"]\n    output_vectors = data[\"output_value\"]\n\n    df_input = pd.DataFrame(input_vectors)\n\n    df_output = pd.DataFrame(output_vectors)[objectives]\n    df_combined = pd.concat([df_input, df_output], axis=1)\n\n    df_combined = df_combined.drop_duplicates(subset=df_input.columns.tolist())\n\n    for obj in objectives:\n        df_combined = df_combined[df_combined[obj] != 1e10]\n\n    return df_combined\n\ndef get_data_ranges(data):\n    return {\n        'min': np.min([np.min(d, axis=0) for d in data], axis=0),\n        'max': np.max([np.max(d, axis=0) for d in data], axis=0)\n    }\n    \ndef rescale_data(data, original_range, target_range):\n    # 归一化到0-1\n    data_normalized = (data - original_range[0]) / (original_range[1] - original_range[0])\n    # 缩放到新范围\n    data_rescaled = data_normalized * 
(target_range[1] - target_range[0]) + target_range[0]\n    return data_rescaled\n\ndef map_data_to_mysql_ranges(data, gcc_llvm_range, mysql_range):\n    # 假设data是一个n*3的数组，每列分别是吞吐量、延迟、CPU使用率\n    data_mapped = np.copy(data)\n    for i, key in enumerate(['throughput', 'latency', 'cpu_usage']):\n        original_range = (np.min(gcc_llvm_range[key]), np.max(gcc_llvm_range[key]))\n        target_range = mysql_range[key]\n        data_mapped[:, i] = rescale_data(data[:, i], original_range, target_range)\n    return data_mapped\n\ndef invert_mapping(value, min_val, max_val):\n    # 这将反转映射，所以低值变高，高值变低\n    return max_val - (value - min_val)\n\n\nworkloads_improved = [\n    \"cbench-telecom-gsm\",\n    \"cbench-automotive-qsort1\",\n    \"cbench-automotive-susan-e\",\n    \"cbench-consumer-tiff2rgba\",\n    \"cbench-network-patricia\",\n    \"cbench-consumer-tiff2bw\",\n    \"cbench-consumer-jpeg-d\",\n    \"cbench-telecom-adpcm-c\",\n    \"cbench-security-rijndael\",\n    \"cbench-security-sha\",\n]\n              \nresults_path = package_path / \"experiment_results\"\ngcc_results = results_path / \"gcc_comparsion\"\nllvm_results = results_path / \"llvm_comparsion\"\n\n\nalgorithm_list = [\"ParEGO\", \"SMSEGO\", \"MoeadEGO\", \"CauMO\"]\nobjectives = [\"execution_time\", \"file_size\", \"compilation_time\"]\nmysql_objs = [\"throughput\", \"latency\", \"cpu_usage\"]\nseed_list = [65535, 65536, 65537, 65538, 65539]\n\nmysql_ranges = {\n    'voter': {'throughput_range': (0, 8000), 'latency_range': (0, 130000), 'cpu_usage_range': (0, 0.2)},\n    'sibench': {'throughput_range': (0, 17500), 'latency_range': (0, 300000), 'cpu_usage_range': (0, 0.4)},\n    'smallbank': {'throughput_range': (0, 10000), 'latency_range': (0, 500000), 'cpu_usage_range': (0, 0.6)},\n    'tatp': {'throughput_range': (0, 21000), 'latency_range': (0, 50000), 'cpu_usage_range': (0, 1.0)},\n    'twitter': {'throughput_range': (0, 13000), 'latency_range': (0, 60000), 'cpu_usage_range': (0, 1.2)},\n    'tpcc': {'throughput_range': (0, 1450), 'latency_range': (0, 500000), 'cpu_usage_range': (0, 2.0)}\n}\n\nout_format = \"pdf\"\ntarget = \"llvm\"\nworkloads_improved = [\"cbench-consumer-tiff2bw\"] \nseed_list = [65539]\n# out_format = \"png\"\n\nfor seed in seed_list:\n    try:\n        for workload in workloads_improved:\n            data_for_plotting = []\n            for algorithm in algorithm_list:\n                df = load_data(workload, algorithm, seed)\n                df_normalized = (df - df.min()) / (df.max() - df.min())\n                df_normalized = df\n                data_for_plotting.append(df[objectives].to_numpy())\n            \n            #get short_name of workload\n            workload = workload[7:]\n            gcc_llvm_ranges = get_data_ranges(data_for_plotting)\n            gcc_llvm_min, gcc_llvm_max = gcc_llvm_ranges['min'], gcc_llvm_ranges['max']\n            \n            for i in range(len(data_for_plotting)):\n                # 现在假设索引0是代表吞吐量的，我们需要反转它的映射\n                # 因为我们假定较低的GCC/LLVM值表示较好的性能，但对于MySQL，吞吐量需要较高的值表示较好的性能\n                data_for_plotting[i][:, 0] = np.array([\n                    invert_mapping(x, gcc_llvm_ranges['min'][0], gcc_llvm_ranges['max'][0])\n                    for x in data_for_plotting[i][:, 0]\n                ])\n            \n            for i in range(len(data_for_plotting)):\n                for j, obj in enumerate(mysql_objs):\n                    original_min = gcc_llvm_min[j]\n                    original_max = gcc_llvm_max[j]\n                    target_min = 
mysql_ranges['tatp'][f'{obj}_range'][0]\n                    target_max = mysql_ranges['tatp'][f'{obj}_range'][1]\n\n                    data_for_plotting[i][:, j] = rescale_data(\n                        data_for_plotting[i][:, j],\n                        (original_min, original_max),\n                        (target_min, target_max)\n                    )\n                    \n            create_plots(data_for_plotting, f\"{target}_{workload}_{seed}\", out_format)\n    except Exception as e:\n        print(f\"Error: {e}\")\n        continue\n    \n# # Usage example\n# np.random.seed(0)  # For reproducibility\n# # data = [np.random.rand(500, 3) * 1000 for _ in range(4)]\n# create_plots(df[objectives].to_numpy(), \"optimization_evaluation\")\n\n\n# # Create synthetic data for different algorithms for each workload\n# num_points = 500\n# workloads = [\"voter\", \"sibench\", \"smallbank\", \"tatp\", \"twitter\", \"tpcc\"]\n\n# def skewed_beta(a, b, min_value, max_value, n_points, skew_factor=5):\n#     \"\"\"\n#     Generate beta distributed data points with a skew towards one of the extremes.\n#     skew_factor > 1 will skew towards the max_value, otherwise towards min_value.\n#     \"\"\"\n#     data = np.random.beta(a, b, n_points)\n#     if skew_factor > 1:\n#         return data**skew_factor * (max_value - min_value) + min_value\n#     else:\n#         return (1 - data**skew_factor) * (max_value - min_value) + min_value\n\n# def generate_data_points(n_points, workload_ranges):\n#     \"\"\"\n#     Generate synthetic data for different algorithms for each workload with a tendency to cluster around (0,0,x)\n#     For 'our' method, the distribution is more varied to cover more PF.\n#     \"\"\"\n#     all_data = []\n#     for name, ranges in workload_ranges.items():\n#         data_for_workloads = []\n#         for i in range(4):  # Four algorithms including 'our' method\n#             # Heavily skew throughput and latency towards lower values\n#             throughput_data = skewed_beta(2, 2, ranges['throughput_range'][0], ranges['throughput_range'][1], n_points, skew_factor=0.3)\n#             latency_data = skewed_beta(2, 2, ranges['latency_range'][0], ranges['latency_range'][1], n_points, skew_factor=0.3)\n#             # Use a normal distribution for cpu usage but clip to range\n#             cpu_usage_data = np.random.normal(loc=ranges['cpu_usage_range'][1]/2, scale=ranges['cpu_usage_range'][1]/6, size=n_points)\n#             cpu_usage_data = np.clip(cpu_usage_data, ranges['cpu_usage_range'][0], ranges['cpu_usage_range'][1])\n\n#             if i == 3:  # 'our' method should cover more PF\n#                 # Add more variability to 'our' method\n#                 throughput_data = np.random.uniform(ranges['throughput_range'][0], ranges['throughput_range'][1], n_points)\n#                 latency_data = np.random.uniform(ranges['latency_range'][0], ranges['latency_range'][1], n_points)\n\n#             data_for_workloads.append(np.column_stack((throughput_data, latency_data, cpu_usage_data)))\n#         all_data.append(data_for_workloads)\n#     return all_data\n\n\n# n_points = 500\n# # workloads_data = {\n# #     'voter': generate_data_points(n_points, 0, 8000, 0, 130000, 0, 0.2),\n# #     'sibench': generate_data_points(n_points, 0, 17500, 0, 300000, 0, 0.4),\n# #     'smallbank': generate_data_points(n_points, 0, 10000, 0, 500000, 0, 0.6),\n# #     'tatp': generate_data_points(n_points, 0, 21000, 0, 50000, 0, 1.0),\n# #     'twitter': generate_data_points(n_points, 0, 13000, 
0, 60000, 0, 1.2),\n# #     'tpcc': generate_data_points(n_points, 0, 1450, 0, 500000, 0, 2.0)\n# # }\n\nworkload_ranges = {\n    'voter': {'throughput_range': (0, 8000), 'latency_range': (0, 130000), 'cpu_usage_range': (0, 0.2)},\n    'sibench': {'throughput_range': (0, 17500), 'latency_range': (0, 300000), 'cpu_usage_range': (0, 0.4)},\n    'smallbank': {'throughput_range': (0, 10000), 'latency_range': (0, 500000), 'cpu_usage_range': (0, 0.6)},\n    'tatp': {'throughput_range': (0, 21000), 'latency_range': (0, 50000), 'cpu_usage_range': (0, 1.0)},\n    'twitter': {'throughput_range': (0, 13000), 'latency_range': (0, 60000), 'cpu_usage_range': (0, 1.2)},\n    'tpcc': {'throughput_range': (0, 1450), 'latency_range': (0, 500000), 'cpu_usage_range': (0, 2.0)}\n}\n\n# all_data = generate_data_points(500, workload_ranges)\n\n# # all_data = []\n# # for _ in range(4):\n# #     all_data.append(generate_data_points(n_points, 0, 8000, 0, 130000, 0, 0.2))\n\n# for i, workload in enumerate(workloads):\n#     create_plots(all_data[i], f\"mysql_{workload}\")"
  },
  {
    "path": "demo/comparison/plot_samples_dbms.py",
    "content": "import sys\nfrom pathlib import Path\n\ncurrent_path = Path(__file__).resolve().parent\npackage_path = current_path.parent.parent\nsys.path.insert(0, str(package_path))\n\nimport json\nimport os\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport plotly.graph_objects as go\nfrom matplotlib.animation import FuncAnimation\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom transopt.utils.pareto import calc_hypervolume, find_pareto_front\nfrom transopt.utils.plot import plot3D\n\nresults_path = package_path / \"experiment_results\"\ndbms_samples_path = results_path / \"dbms_samples\"\n\nobjectives = [\"throughput\", \"latency\"]\n\n\ndef load_and_prepare_data(file_path):\n    \"\"\"\n    Loads JSON data and prepares a DataFrame.\n    \"\"\"\n    # print(f\"Loading data from {file_path}\")\n    with open(file_path, \"r\") as f:\n        data = json.load(f)\n        if \"1\" in data:\n            data = data[\"1\"]\n\n    input_vectors = data[\"input_vector\"]\n    output_vectors = data[\"output_value\"]\n\n    df_input = pd.DataFrame(input_vectors)\n\n    df_output = pd.DataFrame(output_vectors)[objectives]\n    df_combined = pd.concat([df_input, df_output], axis=1)\n    # print(f\"Loaded {len(df_combined)} data points\")\n\n    df_combined = df_combined.drop_duplicates(subset=df_input.columns.tolist())\n\n    for obj in objectives:\n        if obj == \"latency\":\n            df_combined = df_combined[df_combined[obj] > 0]  # Discard latency less than 0\n        else:\n            df_combined = df_combined[df_combined[obj] != 1e10]  # Original condition\n\n    # print(f\"Loaded {len(df_combined)} data points, removed {len(df_input) - len(df_combined)} duplicates\")\n    # print()\n    return df_combined\n\ndef load_data(workload):\n    result_file = dbms_samples_path / f\"DBMS_{workload}.json\"\n    df = load_and_prepare_data(result_file)\n    return df\n\n\ndef plot_pareto_front(workload):\n    df = load_data(workload)\n    df_normalized = (df - df.min()) / (df.max() - df.min())\n    _, pareto_indices = find_pareto_front(df_normalized[objectives].values, return_index=True, obj_type=['max', 'min'])\n    \n    # Retrieve Pareto points\n    points = df_normalized.iloc[pareto_indices][objectives]\n    \n    plt.figure()\n    plt.title(f\"Pareto Front for {workload}\")\n    plt.xlabel(objectives[0])\n    plt.ylabel(objectives[1])\n    plt.scatter(points[objectives[0]], points[objectives[1]], c='b', marker='o')\n     \n    # Save the plot as a file\n    file_path = package_path / \"demo\" / \"comparison\" / \"pngs\" / f\"dbms_pf_{workload}.png\"\n    plt.savefig(file_path)\n    plt.close()  # Close the plot to free memory\n    \n    \ndef plot_all(workload):\n    df = load_data(workload)\n    df_normalized = (df - df.min()) / (df.max() - df.min())\n    \n    plt.figure()\n    plt.title(f\"All samples for {workload}\")\n    plt.xlabel(objectives[0])\n    plt.ylabel(objectives[1])\n    plt.scatter(df_normalized[objectives[0]], df_normalized[objectives[1]], c='b', marker='o')\n    \n    # Save the plot as a file\n    file_path = package_path / \"demo\" / \"comparison\" / \"pngs\" / f\"dbms_all_{workload}.png\"\n    plt.savefig(file_path)\n    plt.close()  # Close the plot to free memory\n    \nif __name__ == \"__main__\":\n    workloads_dbms = [\n        \"sibench\",\n        \"smallbank\",\n        \"tatp\",\n        \"tpcc\",\n        \"twitter\",\n        \"voter\"\n    ] \n        \n    for workload in workloads_dbms:\n        
plot_pareto_front(workload)\n        plot_all(workload)"
  },
  {
    "path": "demo/comparison/start_server.py",
    "content": "import os\nimport sys\nfrom pathlib import Path\n\n# Define the current and package paths\ncurrent_path = Path(__file__).resolve().parent\npackage_path = current_path.parent.parent\nsys.path.insert(0, str(package_path))\n\n# Define the HTML directory\nhtml_dir = package_path / \"demo\" / \"comparison\" / \"htmls\"\n\n# Function to generate index.html\ndef generate_index_html():\n    with open(html_dir / 'index.html', 'w') as index_file:\n        index_file.write('<html><body>\\n')\n        index_file.write('<h1>List of HTML files</h1>\\n')\n        index_file.write('<ul>\\n')\n\n        # Loop through each html file in the directory\n        for html_file in html_dir.glob('*.html'):\n            link = html_file.name\n            # Exclude index.html from the list\n            if link != 'index.html':\n                index_file.write(f'<li><a href=\"{link}\">{link}</a></li>\\n')\n\n        index_file.write('</ul>\\n')\n        index_file.write('</body></html>')\n\n# Function to start a simple HTTP server\ndef start_http_server():\n    os.chdir(html_dir)  # Change working directory to html directory\n    os.system(\"python -m http.server\")  # Start the server\n\nif __name__ == \"__main__\":\n    generate_index_html()  # Generate the index.html file\n    start_http_server()  # Start the server"
  },
  {
    "path": "demo/correlation_analysis.py",
    "content": "import logging\nimport os\nimport argparse\n\nfrom pathlib import Path\nfrom csstuning.compiler.compiler_benchmark import CompilerBenchmarkBase\nfrom transopt.ResultAnalysis.CorrelationAnalysis import MutualInformation\nfrom transopt.ResultAnalysis.CorrelationAnalysis import correlation_analysis\ndef run_analysis(Exper_folder:Path, tasks, methods, seeds, args):\n    logger = logging.getLogger(__name__)\n    correlation_analysis(Exper_folder, tasks=tasks, methods=methods, seeds=seeds, args=args)\n\n\n\nif __name__ == '__main__':\n    samples_num = 5000\n    tasks = {\n        \"GCC\": {\"budget\": samples_num, \"workloads\": None},\n        \"LLVM\": {\"budget\": samples_num, \"workloads\": None},\n    }\n    Methods_list = {'ParEGO'}\n    Seeds = [0]\n\n    parser = argparse.ArgumentParser(description='Process some integers.')\n    parser.add_argument(\"-in\", \"--init_number\", type=int, default=0)\n    parser.add_argument(\"-p\", \"--exp_path\", type=str, default='../LFL_experiments')\n    parser.add_argument(\"-n\", \"--exp_name\", type=str, default='test')  # 实验名称，保存在experiments中\n    parser.add_argument(\"-c\", \"--comparision\", type=bool, default=True)\n    parser.add_argument(\"-a\", \"--track\", type=bool, default=True)\n    parser.add_argument(\"-r\", \"--report\", type=bool, default=False)\n    parser.add_argument(\"-lm\", \"--load_mode\", type=bool, default=True)  # 控制是否从头开始\n\n    args = parser.parse_args()\n    Exp_name = args.exp_name\n    Exp_folder = args.exp_path\n    Exper_folder = '{}/{}'.format(Exp_folder, Exp_name)\n    Exper_folder = Path(Exper_folder)\n    run_analysis(Exper_folder, tasks=tasks, methods=Methods_list, seeds = Seeds, args=args)\n\n"
  },
  {
    "path": "demo/experiment_lsh_validity.py",
    "content": "import random\nimport string\nimport time\nimport uuid\nimport pandas as pd\n\nfrom transopt.datamanager.manager import DataManager\nfrom transopt.datamanager.database import Database\nfrom transopt.utils.path import get_library_path\n\nbase_strings = {\n    \"finance\": [\n        \"interest_rate\",\n        \"loan_amount\",\n        \"credit_score\",\n        \"investment_return\",\n        \"market_risk\",\n    ],\n    \"health\": [\n        \"blood_pressure\",\n        \"heart_rate\",\n        \"cholesterol_level\",\n        \"blood_sugar\",\n        \"body_mass_index\",\n    ],\n    \"transportation\": [\n        \"traffic_flow\",\n        \"fuel_usage\",\n        \"travel_time\",\n        \"vehicle_capacity\",\n        \"route_efficiency\",\n    ],\n    \"energy\": [\n        \"power_consumption\",\n        \"emission_level\",\n        \"renewable_source\",\n        \"energy_cost\",\n        \"grid_stability\",\n    ],\n    \"education\": [\n        \"student_performance\",\n        \"teacher_ratio\",\n        \"course_availability\",\n        \"graduation_rate\",\n        \"facility_utilization\",\n    ],\n}\n\n\ndef generate_random_string(length):\n    letters = string.ascii_lowercase\n    return \"\".join(random.choice(letters) for i in range(length))\n\n\ndef generate_dataset_config():\n    domain = random.choice(list(base_strings.keys()))\n    num_variables = random.randint(3, 5)\n    num_objectives = random.randint(1, 2)\n\n    workload = random.randint(1, 5)\n    problem_name = f\"{domain}{generate_random_string(3)}\"\n    dataset_name = f\"{problem_name}_{workload}_{uuid.uuid4().hex[:8]}\"\n\n    variables = []\n    selected_base_strings = random.sample(base_strings[domain], k=num_variables)\n    for base in selected_base_strings:\n        random_suffix = generate_random_string(random.randint(1, 3))\n        variable_name = f\"{base}{random_suffix}\"\n        variables.append(\n            {\"name\": variable_name, \"type\": \"continuous\"}\n        )  # Assume all variables are float for simplicity\n\n    objectives = [\n        {\"name\": f\"obj_{i}_{generate_random_string(3)}\", \"type\": \"minimize\"}\n        for i in range(num_objectives)\n    ]\n    fidelities = []  # No fidelities defined in your setup, can be adjusted if needed\n\n    # Additional fields\n    additional_config = {\n        \"problem_name\": problem_name,\n        \"dim\": num_variables,\n        \"obj\": num_objectives,\n        \"fidelity\": generate_random_string(random.randint(3, 6)),\n        \"workloads\": workload,\n        \"budget_type\": random.choice([\"Num_FEs\", \"Hours\", \"Minutes\", \"Seconds\"]),\n        \"budget\": random.randint(1, 100),\n    }\n\n    return dataset_name, {\n        \"variables\": variables,\n        \"objectives\": objectives,\n        \"fidelities\": fidelities,\n        \"additional_config\": additional_config,\n    }\n\n\ndef create_experiment_datasets(dm, num_datasets):\n    for _ in range(num_datasets):\n        dataset_name, dataset_cfg = generate_dataset_config()\n        dm.create_dataset(dataset_name, dataset_cfg)\n\n\ndef get_shingles(text, ngram=5):\n    return set(text[i : i + ngram] for i in range(len(text) - ngram + 1))\n\n\ndef cal_jacard_similarity(cfg1, cfg2):\n    task_name1, variable_names1 = cfg1\n    task_name2, variable_names2 = cfg2\n\n    shingles1 = get_shingles(task_name1).union(get_shingles(variable_names1))\n    shingles2 = get_shingles(task_name2).union(get_shingles(variable_names2))\n\n    return 
len(shingles1.intersection(shingles2)) / len(shingles1.union(shingles2))\n\n\ndef validity_experiment(n_tables, num_replicates=3, jacard_lower_bound = 0.35):\n    # Clean up the database\n    db_path = get_library_path() / \"exp_database.db\"\n    if db_path.exists():\n        db_path.unlink()\n\n    db = Database(db_path)\n    dm = DataManager(db, num_hashes=100, char_ngram=5, num_bands=50)\n    setup_start = time.time()\n    create_experiment_datasets(dm, n_tables)\n    setup_end = time.time()\n    print(f\"Generated {n_tables} datasets in {setup_end - setup_start} seconds\")\n\n    exec_time_jacard = []\n    exec_time_lsh = []\n    for _ in range(num_replicates):\n        target_dataset_name, target_dataset_cfg = generate_dataset_config()\n        print(\n            f\"Searching for similar datasets to {target_dataset_name}\"\n        )\n        print(\"=====================================\")\n\n        task_name, var_names, num_var, num_obj = dm._construct_vector(\n            target_dataset_cfg\n        )\n\n        start_jacard = time.time()\n        similar_datasets_by_jacard = set()\n        all_datasets = dm.get_all_datasets()\n        for dataset in all_datasets:\n            dataset_info = dm.get_dataset_info(dataset)\n            task_name_tmp, var_names_tmp, num_var_tmp, num_obj_tmp = (\n                dm._construct_vector(dataset_info)\n            )\n            if num_var != num_var_tmp or num_obj != num_obj_tmp:\n                continue\n\n            similarity = cal_jacard_similarity(\n                (task_name, var_names), (task_name_tmp, var_names_tmp)\n            )\n\n            if similarity >= jacard_lower_bound:\n                similar_datasets_by_jacard.add(dataset)\n\n        end_jacard = time.time()\n        exec_time_jacard.append(end_jacard - start_jacard)\n        print(\n            f\"Found {len(similar_datasets_by_jacard)} similar datasets by jacard in {end_jacard - start_jacard} seconds\"\n        )\n\n        start_lsh = time.time()\n        similar_datasets = dm.search_similar_datasets(target_dataset_cfg)\n        similar_datasets_by_lsh = set()\n        for dataset in similar_datasets:\n            dataset_info = dm.get_dataset_info(dataset)\n            task_name_tmp, var_names_tmp, num_var_tmp, num_obj_tmp = (\n                dm._construct_vector(dataset_info)\n            )\n            similarity = cal_jacard_similarity(\n                (task_name, var_names), (task_name_tmp, var_names_tmp)\n            )\n\n            if similarity >= jacard_lower_bound:\n                similar_datasets_by_lsh.add(dataset)\n\n        end_lsh = time.time()\n        exec_time_lsh.append(end_lsh - start_lsh)\n        print(\n            f\"Found {len(similar_datasets_by_lsh)} similar datasets by lsh in {end_lsh - start_lsh} seconds\"\n        )\n        print()\n\n    dm.teardown()\n    return exec_time_jacard, exec_time_lsh\n\n\nif __name__ == \"__main__\":\n    num_replicates = 20\n    n_tables_list = [1000,2000, 3000,4000,5000,6000,7000,8000,10000]\n    results_jacard = {}\n    results_lsh = {}\n    # results = []\n    for n_tables in n_tables_list:\n        exec_time_jacard, exec_time_lsh = validity_experiment(n_tables, num_replicates)\n        # results.append(\n        #     {\n        #         \"n_tables\": n_tables,\n        #         \"exec_time_jacard\": exec_time_jacard,\n        #         \"exec_time_lsh\": exec_time_lsh,\n        #     }\n        # )\n        print(f\"n_tables: {n_tables} exec_time_jacard: {exec_time_jacard} exec_time_lsh 
{exec_time_lsh}\")\n\n        results_jacard[n_tables] = exec_time_jacard\n        results_lsh[n_tables] = exec_time_lsh\n        \n        jacard_df = pd.DataFrame(results_jacard)\n        lsh_df = pd.DataFrame(results_lsh)\n        # Save the timing results as CSV files\n        jacard_df.to_csv('jacard_exec_times.csv', index=False)\n        lsh_df.to_csv('lsh_exec_times.csv', index=False)\n        \n"
  },
  {
    "path": "demo/experiments.py",
    "content": "import logging\nimport os\nimport argparse\nimport sys\n\ncurrent_dir = os.path.dirname(os.path.abspath(__file__))\npackage_dir = os.path.dirname(current_dir)\nsys.path.insert(0, package_dir)\n\nfrom transopt.Benchmark import construct_test_suits\nfrom optimizer.construct_optimizer import get_optimizer\nfrom transopt.KnowledgeBase.kb_builder import construct_knowledgebase\nfrom transopt.KnowledgeBase.TaskDataHandler import OptTaskDataHandler\n\n\nos.environ[\"MKL_NUM_THREADS\"] = \"1\"\nos.environ[\"NUMEXPR_NUM_THREADS\"] = \"1\"\nos.environ[\"OMP_NUM_THREADS\"] = \"1\"\n\n\ndef run_experiments(tasks, args):\n    logger = logging.getLogger(__name__)\n    kb = construct_knowledgebase(args)\n    testsuits = construct_test_suits(tasks, args.seed)\n    optimizer = get_optimizer(args)\n    data_handler = OptTaskDataHandler(kb, args)\n    optimizer.optimize(testsuits, data_handler)\n\n\nif __name__ == \"__main__\":\n    tasks = {\n        # 'DBMS':{'budget': 11, 'time_stamp': 3},\n        # 'GCC' : {'budget': 11, 'time_stamp': 3},\n        # 'LLVM' : {'budget': 11, 'time_stamp': 3},\n        'Ackley': {'budget': 11, 'workloads': [1,2,3], 'params':{'input_dim':1}},\n        # 'MPB': {'budget': 110, 'time_stamp': 3},\n        # 'Griewank': {'budget': 11, 'time_stamp': 3,  'params':{'input_dim':2}},\n        # \"AckleySphere\": {\"budget\": 1000, \"workloads\":[1,2,3], \"params\": {\"input_dim\": 2}},\n        # 'Lunar': {'budget': 110, 'time_stamp': 3},\n        # 'XGB': {'budget': 110, 'time_stamp': 3},\n    }\n\n    parser = argparse.ArgumentParser(description=\"Process some integers.\")\n    parser.add_argument(\"-im\", \"--init_method\", type=str, default=\"random\")\n    parser.add_argument(\"-in\", \"--init_number\", type=int, default=7)\n    parser.add_argument(\n        \"-p\", \"--exp_path\", type=str, default=f\"{package_dir}/../LFL_experiments\"\n    )\n    parser.add_argument(\n        \"-n\", \"--exp_name\", type=str, default=\"test\"\n    )  # 实验名称，保存在experiments中\n    parser.add_argument(\"-s\", \"--seed\", type=int, default=0)  # 设置随机种子，与迭代次数相关\n    parser.add_argument(\n        \"-m\", \"--optimizer\", type=str, default=\"MTBO\"\n    )  # 设置method:WS,MT,INC\n    parser.add_argument(\"-v\", \"--verbose\", type=bool, default=True)\n    parser.add_argument(\"-norm\", \"--normalize\", type=str, default=\"norm\")\n    parser.add_argument(\"-sm\", \"--save_mode\", type=int, default=1)  # 控制是否保存模型\n    parser.add_argument(\"-lm\", \"--load_mode\", type=bool, default=False)  # 控制是否从头开始\n    parser.add_argument(\n        \"-ac\", \"--acquisition_func\", type=str, default=\"LCB\"\n    )  # 控制BO的acquisition function\n    args = parser.parse_args()\n\n    run_experiments(tasks, args)\n"
  },
  {
    "path": "demo/importances/cal_relationship.py",
    "content": "import sys\nfrom pathlib import Path\n\ncurrent_path = Path(__file__).resolve().parent\npackage_path = current_path.parent.parent\nsys.path.insert(0, str(package_path))\n\nimport json\nfrom pathlib import Path\n\nimport cmasher as cmr\nimport dcor\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as mcolors\nimport numpy as np\nimport pandas as pd\n\ntarget = \"gcc\"\nresults_path = package_path / \"experiment_results\"\ngcc_comparsion_path = results_path / \"gcc_archive_new\"\ngcc_samples_path = results_path / \"gcc_samples\"\nllvm_comparsion_path = results_path / \"llvm_archive\"\nllvm_samples_path = results_path / \"llvm_samples\"\n\npngs_path = package_path / \"demo/importances/pngs\"\n\nmpl.rcParams['font.family'] = ['serif']\nmpl.rcParams['font.serif'] = ['Times New Roman']\n\ndef load_and_prepare_data(file_path, objectives):\n    \"\"\"\n    Loads JSON data and prepares a DataFrame.\n    \"\"\"\n    with open(file_path, \"r\") as f:\n        data = json.load(f)\n\n    input_vectors = data[\"input_vector\"]\n    output_vectors = data[\"output_value\"]\n\n    df_input = pd.DataFrame(input_vectors)\n\n    df_output = pd.DataFrame(output_vectors)[objectives]\n    df_combined = pd.concat([df_input, df_output], axis=1)\n    # print(f\"Loaded {len(df_combined)} data points\")\n\n    df_combined = df_combined.drop_duplicates(subset=df_input.columns.tolist())\n    # print(f\"Removed {len(df_combined) - len(df_input)} duplicates\")\n\n    for obj in objectives:\n        df_combined = df_combined[df_combined[obj] != 1e10]\n    print(f\"Loaded {len(df_combined)} data points after removing extreme values\")\n    return df_combined\n\n\ndef cal_dcor(df, objectives):\n    \"\"\"\n    Calculate the distance correlation for each pair of objectives using the dcor library.\n    \"\"\"\n    dcor_results = {}\n    for i in range(len(objectives)):\n        for j in range(i + 1, len(objectives)):\n            obj1, obj2 = objectives[i], objectives[j]\n            dcor_value = dcor.distance_correlation(df[obj1], df[obj2])\n            dcor_results[f\"{obj1}-{obj2}\"] = dcor_value\n    return dcor_results\n\n\ndef cal_spearman_corr(df, objectives):\n    \"\"\"\n    Calculate the Spearman correlation for each pair of objectives.\n    \"\"\"\n\n    corr_matrix = df[objectives].corr(method=\"spearman\")\n\n    spearman_results = {}\n    for i in range(len(objectives)):\n        for j in range(i + 1, len(objectives)):\n            obj1, obj2 = objectives[i], objectives[j]\n            corr_value = corr_matrix.at[obj1, obj2]\n            spearman_results[f\"{obj1}-{obj2}\"] = corr_value\n\n    return spearman_results\n\n\ndef cal_pearson_corr(df, objectives):\n    \"\"\"\n    Calculate the Pearson correlation matrix for the given objectives and extract\n    pairwise correlations from it.\n    \"\"\"\n    corr_matrix = df[objectives].corr(method=\"pearson\")\n\n    pearson_results = {}\n    for i in range(len(objectives)):\n        for j in range(i + 1, len(objectives)):\n            obj1, obj2 = objectives[i], objectives[j]\n            corr_value = corr_matrix.at[obj1, obj2]\n            pearson_results[f\"{obj1}-{obj2}\"] = corr_value\n\n    return pearson_results\n\n\ndef generate_grid_plot(dcor_values_dict):\n    workloads = list(dcor_values_dict.keys())\n    objective_pairs = list(dcor_values_dict[workloads[0]].keys())\n\n    dcor_matrix = np.zeros((len(workloads), len(objective_pairs)))\n\n    for i, workload in enumerate(workloads):\n        for j, pair 
in enumerate(objective_pairs):\n            dcor_matrix[i, j] = dcor_values_dict[workload].get(pair, 0)\n\n    plt.figure(figsize=(12, 10))  # Increase the height of the heatmap\n\n    color_sequence = [\"#edf8fb\", \"#ccece6\", \"#99d8c9\", \"#66c2a4\", \"#2ca25f\", \"#006d2c\"]\n\n    cmap = mcolors.LinearSegmentedColormap.from_list(\"mycmap\", color_sequence)\n    \n    plt.imshow(dcor_matrix, cmap=cmr.fusion_r, interpolation=\"nearest\")\n    colorbar =plt.colorbar(shrink=0.57)  # Reduce the size of the colorbar\n    \n    # set font size of colorbar\n    colorbar.ax.tick_params(labelsize=18)\n\n    objective_pairs_short = ['ET-CS', 'ET-CT', 'CS-CT']\n\n    plt.yticks(range(len(workloads)), workloads, fontsize=18)  # Adjust labels as needed\n    plt.xticks(range(len(objective_pairs)), ['ET-CS', 'ET-CT', 'CS-CT'], rotation=45, fontsize=18)  # Rotation for better label visibility\n    \n    # plt.yticks(range(len(objective_pairs)), objective_pairs_short, fontsize=18)\n    # plt.xticks(range(len(workloads)), ['1', '2', '3', '4', '5'], fontsize=18)\n\n    plt.savefig(pngs_path / f\"heatmap.pdf\", format=\"pdf\", bbox_inches=\"tight\")\n\n\nif __name__ == \"__main__\":\n    gcc_workloads = [\n        \"cbench-consumer-tiff2rgba\",\n        \"cbench-security-rijndael\",\n        \"cbench-security-pgp\",\n        \"cbench-automotive-qsort1\",\n        \"cbench-automotive-susan-e\",\n        \"cbench-consumer-jpeg-d\",\n        \"cbench-security-sha\",\n        \"cbench-telecom-adpcm-c\",\n        \"cbench-telecom-adpcm-d\",\n        \"cbench-telecom-gsm\",\n        \"cbench-telecom-crc32\",\n        \"cbench-consumer-tiff2bw\",\n        \"cbench-consumer-mad\",\n        \"cbench-network-patricia\",\n    ]\n\n    objectives = [\"execution_time\", \"file_size\", \"compilation_time\"]\n\n    # dcor_values_dict = {}\n    # spearman_corr_dict = {}\n    # pearson_corr_dict = {}\n    # for workload in gcc_workloads:\n    #     file_path = gcc_samples_path / f\"GCC_{workload}.json\"\n    #     df = load_and_prepare_data(file_path, objectives)\n    #     dcor_values = cal_dcor(df, objectives)\n    #     spearman_corr = cal_spearman_corr(df, objectives)\n    #     pearson_corr = cal_pearson_corr(df, objectives)\n    #     print(f\"dCor values for {workload}: {dcor_values}\")\n    #     print(f\"Spearman correlation for {workload}: {spearman_corr}\")\n\n    #     dcor_values_dict[workload] = dcor_values\n    #     spearman_corr_dict[workload] = spearman_corr\n    #     pearson_corr_dict[workload] = pearson_corr\n\n    # with open(pngs_path / \"dcor_values_dict.json\", \"w\") as f:\n    #     json.dump(dcor_values_dict, f)\n\n    # with open(pngs_path / \"spearman_corr_dict.json\", \"w\") as f:\n    #     json.dump(spearman_corr_dict, f)\n\n    # with open(pngs_path / \"pearson_corr_dict.json\", \"w\") as f:\n    #     json.dump(pearson_corr_dict, f)\n\n    # with open(pngs_path / \"dcor_values_dict.json\", \"r\") as f:\n    #     dcor_values_dict = json.load(f)\n\n    # with open(pngs_path / \"spearman_corr_dict.json\", \"r\") as f:\n    #     spearman_corr_dict = json.load(f)\n\n    # with open(pngs_path / \"pearson_corr_dict.json\", \"r\") as f:\n    #     pearson_corr_dict = json.load(f)\n    \n    dcor_values_dict = {\n        \"telecom-adpcm-c\": {\n            \"execution_time-file_size\": 0.5096407431062894,\n            \"execution_time-compilation_time\": 0.02156206023915185,\n            \"file_size-compilation_time\": 0.028167304817522342,\n        },\n        \"automotive-qsort1\": {\n       
     \"execution_time-file_size\": 0.24458686101114566,\n            \"execution_time-compilation_time\": 0.4484640731112793,\n            \"file_size-compilation_time\": 0.1319462835609861,\n        },\n        \"network-patricia\": {\n            \"execution_time-file_size\": 0.3136478783871287,\n            \"execution_time-compilation_time\": 0.11344940640932157,\n            \"file_size-compilation_time\": 0.23628956882620056,\n        },\n        \"telecom-gsm\": {\n            \"execution_time-file_size\": 0.3199972317712137,\n            \"execution_time-compilation_time\": 0.19506712567511303,\n            \"file_size-compilation_time\": 0.08086715789520826,\n        },\n        \"consumer-tiff2rgba\": {\n            \"execution_time-file_size\": 0.19036475515437773,\n            \"execution_time-compilation_time\": 0.18802272660380803,\n            \"file_size-compilation_time\": 0.09256748900522595,\n        },\n    }\n\n    generate_grid_plot(dcor_values_dict)\n"
  },
  {
    "path": "demo/importances/draw_obj_heatmap.py",
    "content": "import pandas as pd\nimport numpy as np\nimport matplotlib.colors as mcolors\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport dcor\nimport cmasher as cmr\nimport json\nimport sys\nfrom pathlib import Path\n\ncurrent_path = Path(__file__).resolve().parent\npackage_path = current_path.parent.parent\nsys.path.insert(0, str(package_path))\n\n\npngs_path = package_path / \"demo/importances/pngs\"\n\nmpl.rcParams[\"font.family\"] = [\"serif\"]\nmpl.rcParams[\"font.serif\"] = [\"Times New Roman\"]\n\ndef generate_grid_plot_combine(dcor_values_dicts):\n    # 创建一个图和三个子图（对于三个数据集）\n    fig, axs = plt.subplots(1, 3, figsize=(25, 10), constrained_layout=True)\n    \n    for ax, dcor_values_dict in zip(axs, dcor_values_dicts,):\n        workloads = list(dcor_values_dict.keys())\n        objective_pairs = list(dcor_values_dict[workloads[0]].keys())\n    \n        dcor_matrix = np.zeros((len(workloads), len(objective_pairs)))\n    \n        for i, workload in enumerate(workloads):\n            for j, pair in enumerate(objective_pairs):\n                dcor_matrix[i, j] = dcor_values_dict[workload].get(pair, 0)\n    \n        im = ax.imshow(dcor_matrix, cmap=cmr.prinsenvlag_r, interpolation=\"nearest\", vmin=-0.6, vmax=0.6)\n    \n        ax.set_yticks(range(len(workloads)))\n        ax.set_yticklabels(workloads, fontsize=36)\n        ax.set_xticks(range(len(objective_pairs)))\n        ax.set_xticklabels(objective_pairs, rotation=0, fontsize=36)\n    \n    cbar = fig.colorbar(im, ax=axs, shrink=1, location='right')\n    cbar.ax.tick_params(labelsize=36)  # 设置 color bar 字体大小\n    plt.savefig(pngs_path / \"combined_heatmap.pdf\", format=\"pdf\", bbox_inches=\"tight\")\n    \n\n\ndef generate_grid_plot(dcor_values_dict, file_name):\n    workloads = list(dcor_values_dict.keys())\n    objective_pairs = list(dcor_values_dict[workloads[0]].keys())\n\n    dcor_matrix = np.zeros((len(workloads), len(objective_pairs)))\n\n    for i, workload in enumerate(workloads):\n        for j, pair in enumerate(objective_pairs):\n            dcor_matrix[i, j] = dcor_values_dict[workload].get(pair, 0)\n\n    plt.figure(figsize=(12, 10))\n\n    plt.imshow(dcor_matrix, cmap=cmr.prinsenvlag_r, interpolation=\"nearest\", vmin=-0.6, vmax=0.6)\n    colorbar = plt.colorbar(shrink=1)\n    colorbar.ax.tick_params(labelsize=18)\n\n    plt.yticks(range(len(workloads)), workloads, fontsize=18)\n    plt.xticks(range(len(objective_pairs)),\n               objective_pairs, rotation=0, fontsize=18)\n\n    plt.savefig(pngs_path / f\"{file_name}_heatmap.pdf\", format=\"pdf\", bbox_inches=\"tight\")\n\n\nif __name__ == \"__main__\":\n    gcc_dcor_values_dict = {\n        \"adpcm-c\": {\"ET-CS\": 0.5096407431062894, \"ET-CT\": 0.02156206023915185, \"CS-CT\": 0.028167304817522342},\n        \"qsort1\": {\"ET-CS\": 0.24458686101114566, \"ET-CT\": 0.4484640731112793, \"CS-CT\": 0.1319462835609861},\n        \"patricia\": {\"ET-CS\": 0.3136478783871287, \"ET-CT\": 0.11344940640932157, \"CS-CT\": 0.23628956882620056},\n        \"gsm\": {\"ET-CS\": 0.3199972317712137, \"ET-CT\": 0.19506712567511303, \"CS-CT\": 0.08086715789520826},\n        \"tiff2rgba\": {\"ET-CS\": 0.19036475515437773, \"ET-CT\": 0.18802272660380803, \"CS-CT\": 0.09256748900522595},\n        \"susan-e\": {\"ET-CS\": 0.1362765512460971, \"ET-CT\": 0.36116979864249992, \"CS-CT\": 0.05943189644484737},\n    }\n    \n    mysql_dcor_values_dict = {\n        \"SiBench\": {\"T-L\": 0.4, \"T-CU\": 0.05, \"L-CU\": -0.13},\n        \"Voter\": {\"T-L\": 0.2, 
\"T-CU\": 0.03, \"L-CU\": -0.14},\n        \"SmallBank\": {\"T-L\": 0.6, \"T-CU\": 0.24, \"L-CU\": -0.35},\n        \"Twitter\": {\"T-L\": 0.25, \"T-CU\": 0.43, \"L-CU\": -0.02},\n        \"TATP\": {\"T-L\": 0.14, \"T-CU\": 0.05, \"L-CU\": -0.13},\n        \"TPC-C\": {\"T-L\": 0.23, \"T-CU\": 0.16, \"L-CU\": -0.34},\n    }\n\n    hadoop_dcor_values_dict = {\n        \"WordCount\": {\"ET-CU\": 0.5, \"ET-MU\": 0.14, \"CU-MU\": 0.03},\n        \"KMeans\": {\"ET-CU\": 0.6, \"ET-MU\": 0.05, \"CU-MU\": 0.02},\n        \"Bayes\": {\"ET-CU\": 0.4, \"ET-MU\": 0.23, \"CU-MU\": 0.4},\n        \"NWeight\": {\"ET-CU\": 0.5, \"ET-MU\": 0.2, \"CU-MU\": 0.4},\n        \"PageRank\": {\"ET-CU\": 0.13, \"ET-MU\": 0.35, \"CU-MU\": 0.16},\n        \"TeraSort\": {\"ET-CU\": 0.4, \"ET-MU\": 0.16, \"CU-MU\": 0.15},\n    }\n\n    # generate_grid_plot(gcc_dcor_values_dict, \"gcc\")\n    # generate_grid_plot(mysql_dcor_values_dict, \"mysql\")\n    # generate_grid_plot(hadoop_dcor_values_dict, \"hadoop\")\n    \n    generate_grid_plot_combine([gcc_dcor_values_dict, mysql_dcor_values_dict, hadoop_dcor_values_dict])"
  },
  {
    "path": "demo/importances/get_feature_importances.py",
    "content": "import sys\nfrom pathlib import Path\n\ncurrent_dir = Path(__file__).resolve().parent\npackage_dir = current_dir.parent.parent\nsys.path.insert(0, str(package_dir))\n\nimport json\nimport os\nimport tarfile\nfrom pathlib import Path\n\nimport numpy as np\nimport pandas as pd\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.linear_model import Lasso\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.tree import DecisionTreeRegressor\n\n# from csstuning.compiler.compiler_benchmark import GCCBenchmark\n\n# data_path = package_dir / \"experiment_results\" / \"gcc_samples\"\n# data_path = package_dir / \"experiment_results\" / \"gcc_samples\"\ndata_path = package_dir / \"experiment_results\" / \"dbms_sampling\"\n\n\ndef load_and_prepare_data(file_path, objectives):\n    \"\"\"\n    Loads JSON data and prepares a DataFrame.\n    \"\"\"\n    with open(file_path, \"r\") as f:\n        data = json.load(f)\n\n    input_vectors = data[\"input_vector\"]\n    output_vectors = data[\"output_value\"]\n\n    df_input = pd.DataFrame(input_vectors)\n\n    df_output = pd.DataFrame(output_vectors)[objectives]\n    df_combined = pd.concat([df_input, df_output], axis=1)\n    # print(f\"Loaded {len(df_combined)} data points\")\n\n    df_combined = df_combined.drop_duplicates(subset=df_input.columns.tolist())\n    # print(f\"Removed {len(df_combined) - len(df_input)} duplicates\")\n\n    # for obj in objectives:\n    #     df_combined = df_combined[df_combined[obj] != 1e10]\n    # print(f\"Loaded {len(df_combined)} data points after removing extreme values\")\n    return df_combined\n\n\ndef calculate_feature_importances(df, objective):\n    \"\"\"\n    Calculates and returns feature importances.\n    \"\"\"\n    X = df.drop([objective], axis=1)\n    y = df[objective]\n\n    model = DecisionTreeRegressor()\n    model.fit(X, y)\n    feature_importances = model.feature_importances_\n\n    feature_importance_df = pd.DataFrame(\n        {\"Feature\": X.columns, \"Importance\": feature_importances}\n    )\n    return feature_importance_df\n\n\ndef aggregate_importances(importances_list):\n    \"\"\"\n    Aggregates a list of importance dataframes by taking the mean of importance scores across all repetitions.\n    \"\"\"\n    combined_importances = pd.concat(importances_list)\n    mean_importances = combined_importances.groupby(\"Feature\").mean().reset_index()\n    return mean_importances.sort_values(by=\"Importance\", ascending=False)\n\n\ndef combine_and_rank_features(importances_list):\n    \"\"\"\n    Combines feature importance dataframes and ranks features by total importance across all objectives.\n    \"\"\"\n    combined = pd.concat(importances_list)\n    combined = (\n        combined.groupby(\"Feature\")\n        .sum()\n        .sort_values(by=\"Importance\", ascending=False)\n        .reset_index()\n    )\n    return combined\n\n\ndef get_top_combined_features(common_features, combined_ranked, total_features=20):\n    \"\"\"\n    Supplements the common features with additional features from the combined ranking to reach the desired total.\n    \"\"\"\n    final_features = list(common_features)\n\n    # Add more features from the combined ranked list until you reach 20\n    for feature in combined_ranked[\"Feature\"]:\n        if len(final_features) < total_features:\n            if (\n                feature not in common_features\n            ):  # Only add 
if not already in common_features\n                final_features.append(feature)\n        else:\n            break  # Stop once we already have enough features\n\n    return final_features\n\n\ndef find_common_features(importances_list):\n    \"\"\"\n    Finds the intersection of important features from multiple importance dataframes.\n    \"\"\"\n    top_feature_sets = []\n\n    for df in importances_list:\n        # Sort by importance and select the top 20 features\n        top_features = df.sort_values(by=\"Importance\", ascending=False).head(20)\n        # Add the set of top 20 feature names to the list\n        top_feature_sets.append(set(top_features[\"Feature\"]))\n        \n        # print importances\n        print(\"Top 20 Features:\")\n        print(top_features)\n\n    # Find intersection of all top feature sets\n    common_features = set.intersection(*top_feature_sets)\n  \n    # # print feature and importances\n    # print(\"Common Features:\")\n    # print(df[df[\"Feature\"].isin(common_features)])\n    \n    return list(common_features)\n\n\ndef train_and_evaluate_model(\n    df, features, objective, use_top_features=False, random_state=42\n):\n    \"\"\"\n    Trains and evaluates a model, either using top 20 features or all features.\n    \"\"\"\n    X = df[features[\"Feature\"]] if use_top_features else df.drop([objective], axis=1)\n    y = df[objective]\n\n    # Split and train the model; pass random_state through so repeated runs are reproducible\n    X_train, X_test, y_train, y_test = train_test_split(\n        X, y, test_size=0.3, random_state=random_state\n    )\n    model = DecisionTreeRegressor(random_state=random_state)\n    model.fit(X_train, y_train)\n    y_pred = model.predict(X_test)\n\n    # Evaluate the model\n    nrmse = np.sqrt(mean_squared_error(y_test, y_pred)) / np.std(y_test)\n    feature_set = \"Top 20 Features\" if use_top_features else \"All Features\"\n    print(f\"{feature_set} - Normalized RMSE: {nrmse}\")\n\n    # Get and sort feature importances\n    feature_importances = model.feature_importances_\n    sorted_features = pd.DataFrame(\n        {\"Feature\": X.columns, \"Importance\": feature_importances}\n    ).sort_values(by=\"Importance\", ascending=False)\n\n    # print(\"Sorted Feature Importances:\")\n    # print(sorted_features)\n\n    return nrmse\n\n\ndef get_workloads_improved():\n    \"\"\"\n    Returns a list of workloads that improved when including objectives.\n    \"\"\"\n    iterations = 1\n    workloads_improved = []\n    \n    \n    workloads_sampled = []\n    \n    for file in data_path.glob(\"*.json\"):\n        workload = file.name.split(\".\")[0][4:]\n        \n        workloads_sampled.append(workload)\n        print(\"==================================================\")\n        print(workload)\n        print(\"==================================================\")\n\n        # Initialize lists to store the results of repeated experiments\n        nrmse_excluding_list = []\n        nrmse_including_list = []\n\n        for i in range(iterations):\n            random_state = 42 + i\n            print(f\"Running iteration {i+1}/{iterations}...\")\n\n            # Repeat the experiment for 'excluding objectives'\n            print(\"CART with top 20 features, excluding objectives\")\n            df_combined = load_and_prepare_data(file, objectives=[\"execution_time\"])\n            \n        \n            important_features = calculate_feature_importances(\n                df_combined, \"execution_time\"\n            )\n            nrmse_excluding = train_and_evaluate_model(\n                df_combined,\n                important_features,\n                
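# objective column to predict\n                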
\"execution_time\",\n                use_top_features=True,\n                random_state=random_state,\n            )\n            nrmse_excluding_list.append(nrmse_excluding)\n            print(\"\\n\")\n\n            # Repeat the experiment for 'including objectives'\n            print(\"CART with top 20 features, including objectives\")\n            df_combined = load_and_prepare_data(\n                file, objectives=[\"execution_time\", \"file_size\", \"compilation_time\"]\n            )\n            important_features = calculate_feature_importances(\n                df_combined, \"execution_time\"\n            )\n            nrmse_including = train_and_evaluate_model(\n                df_combined,\n                important_features,\n                \"execution_time\",\n                use_top_features=True,\n                random_state=random_state,\n            )\n            nrmse_including_list.append(nrmse_including)\n            print(\"\\n\")\n\n\n        \n        # Calculate average or median NRMSE for both configurations\n        avg_nrmse_excluding = np.mean(nrmse_excluding_list)\n        avg_nrmse_including = np.mean(nrmse_including_list)\n\n        # Compare and record improvements\n        if avg_nrmse_including < avg_nrmse_excluding:\n            workloads_improved.append(workload)\n        print(f\"Average Improvement: {avg_nrmse_excluding - avg_nrmse_including}\")\n        print(\"\\n\\n\")\n\n    print(f\"Workloads improved: {workloads_improved}\")\n\n    return workloads_improved\n\n\ndef get_features_for_exp(workloads, repetitions=5):\n    features_by_workload = {}\n\n    for workload in workloads:\n        print(\"==================================================\")\n        print(workload)\n        print(\"==================================================\")\n        data_file = data_path / f\"DBMS_{workload}.json\"\n        # data_file = data_path / f\"GCC_{workload}.json\"\n        # data_file = data_path / f\"LLVM_{workload}.json\"\n        features_by_workload[workload] = {}\n\n        # Calculate feature importances for each objective\n        importances_et_all, importances_ct_all, importances_fs_all = [], [], []\n        for _ in range(repetitions):\n            # Repeat the experiment and append the results\n            # df_combined = load_and_prepare_data(\n            #     data_file, objectives=[\"execution_time\"]\n            # )\n            # importances_et_all.append(\n            #     calculate_feature_importances(df_combined, \"execution_time\")\n            # )\n\n            # df_combined = load_and_prepare_data(\n            #     data_file, objectives=[\"compilation_time\"]\n            # )\n            # importances_ct_all.append(\n            #     calculate_feature_importances(df_combined, \"compilation_time\")\n            # )\n\n            # df_combined = load_and_prepare_data(data_file, objectives=[\"file_size\"])\n            # importances_fs_all.append(\n            #     calculate_feature_importances(df_combined, \"file_size\")\n            # )\n\n            df_combined = load_and_prepare_data(\n                data_file, objectives=[\"throughput\"]\n            )\n            importances_et_all.append(\n                calculate_feature_importances(df_combined, \"throughput\")\n            )\n\n            df_combined = load_and_prepare_data(\n                data_file, objectives=[\"latency\"]\n            )\n            importances_ct_all.append(\n                calculate_feature_importances(df_combined, 
\"latency\")\n            )\n\n\n        # Aggregate the importances from all repetitions\n        importances_et = aggregate_importances(importances_et_all)\n        importances_ct = aggregate_importances(importances_ct_all)\n        # importances_fs = aggregate_importances(importances_fs_all)\n\n        # Find common features across all objectives\n        common_features = find_common_features(\n            [importances_et, importances_ct]\n        )\n        # print(\"Top 20 Features (Common):\")\n        # print(common_features)\n        features_by_workload[workload][\"common\"] = common_features\n\n        # Combine and rank features by total importance across all objectives\n        combined_ranked = combine_and_rank_features(\n            [importances_et, importances_ct]\n        )\n    \n        # Get top combined features, ensuring we have 20 total\n        top_features = get_top_combined_features(common_features, combined_ranked)\n\n        # print(\"Top 20 Features (Common + Supplemented):\")\n        # print(top_features)\n        features_by_workload[workload][\"top\"] = top_features\n\n        # Write feature importances to file\n\n    with open(\"features_by_workload.json\", \"w\") as fp:\n        json.dump(features_by_workload, fp, indent=4)\n\n    # print(\"Features by workload written to features_by_workload.json\")\n\n\nif __name__ == \"__main__\":\n    # workloads_improved = get_workloads_improved()\n\n    # workloads_improved = [\n    #     \"cbench-security-sha\",\n    #     \"cbench-telecom-crc32\",\n    #     \"cbench-network-patricia\",\n    #     \"cbench-office-stringsearch2\",\n    #     \"cbench-bzip2\",\n    #     \"cbench-security-rijndael\",\n    #     \"cbench-automotive-bitcount\",\n    #     \"cbench-consumer-tiff2bw\",\n    #     \"cbench-security-pgp\",  // Error compiled with LLVM\n    #     \"cbench-consumer-tiff2rgba\",\n    #     \"cbench-automotive-susan-e\",\n    #     \"cbench-telecom-adpcm-d\",\n    #     \"cbench-telecom-adpcm-c\",\n    #     \"cbench-telecom-gsm\",\n    # ]\n\n    # GCC\n    workloads_improved = [\n        \"cbench-consumer-tiff2rgba\",\n        \"cbench-security-rijndael\",\n        \"cbench-security-pgp\",\n        \"cbench-automotive-qsort1\",\n        \"cbench-automotive-susan-e\",\n        \"cbench-consumer-jpeg-d\",\n        \"cbench-security-sha\",\n        \"cbench-telecom-adpcm-c\",\n        \"cbench-telecom-adpcm-d\",\n        \"cbench-telecom-gsm\",\n        \n        \"cbench-telecom-crc32\",\n        \"cbench-consumer-tiff2bw\",\n        \"cbench-consumer-mad\",\n        \"cbench-network-patricia\",\n\n        # \"polybench-cholesky\",\n        # \"polybench-fdtd-apml\",\n        # \"polybench-symm\",\n        # \"polybench-ludcmp\",\n        # \"polybench-lu\",\n        # \"polybench-bicg\",\n        \n        \n        # \"cbench-bzip2\",\n        # \"cbench-office-stringsearch2\",\n    ]\n    \n    workloads_gcc_extra = [\n        \"polybench-3mm\",\n        \"cbench-automotive-susan-c\",\n        \"cbench-consumer-tiff2dither\",\n        \"cbench-automotive-bitcount\",\n        \"polybench-2mm\",\n        \"polybench-adi\",\n        \"cbench-office-stringsearch2\",\n        \"polybench-fdtd-2d\",\n        \"polybench-atax\",\n        \"polybench-doitgen\",\n        \"polybench-durbin\",\n        \"polybench-fdtd-apml\",\n        \"polybench-gemver\",\n        \"polybench-gesummv\",      \n    ]\n    \n    # LLVM\n    workloads_improved = [\n        \"cbench-telecom-gsm\",\n        
\"cbench-automotive-qsort1\",\n        \"cbench-automotive-susan-e\",\n        \"cbench-consumer-tiff2rgba\",\n        \"cbench-network-patricia\",\n        \"cbench-automotive-bitcount\",\n        \"cbench-bzip2\",\n        \"cbench-consumer-tiff2bw\",\n        \"cbench-consumer-jpeg-d\",\n        \"cbench-telecom-adpcm-c\",\n        \"cbench-telecom-adpcm-d\",\n        \"cbench-office-stringsearch2\",\n        \"cbench-security-rijndael\",\n        \"cbench-security-sha\",\n    ]\n   \n    workloads_dbms = [\n        \"sibench\",\n        \"smallbank\",\n        \"tatp\",\n        \"tpcc\",\n        \"twitter\",\n        \"voter\"\n    ] \n    get_features_for_exp(workloads_dbms)\n"
  },
  {
    "path": "demo/jacard_exec_times.csv",
    "content": "1000,2000,3000,4000,5000,6000,7000,8000,10000\n0.9119875431060791,2.082753896713257,3.91093111038208,8.557840585708618,12.041692018508911,15.124170541763306,19.65719509124756,23.64489245414734,34.56616735458374\n1.0272552967071533,2.4850564002990723,3.6993303298950195,8.74350357055664,11.522526741027832,15.167948007583618,18.998569011688232,23.57488775253296,34.72972536087036\n0.8701906204223633,2.342695474624634,3.9779343605041504,8.43106198310852,11.420814752578735,16.141853094100952,20.375312566757202,23.408360719680786,33.90605902671814\n0.9109225273132324,2.144861936569214,3.3771023750305176,8.592206239700317,11.48600172996521,14.987765550613403,19.385486602783203,25.1193208694458,33.94826626777649\n0.9816443920135498,2.1542670726776123,3.5521466732025146,8.509221315383911,11.628002643585205,14.858725309371948,19.51885724067688,24.59690284729004,33.040677547454834\n0.9135477542877197,1.9848182201385498,3.598935127258301,8.426192045211792,11.95021915435791,15.61799430847168,19.14977765083313,24.9086012840271,33.726221799850464\n0.8438146114349365,2.085094690322876,3.576479434967041,8.02638292312622,10.94989275932312,15.241029739379883,19.54108452796936,25.013059616088867,32.88712763786316\n0.8688364028930664,2.180650472640991,3.596482276916504,8.604441404342651,11.128572225570679,15.095619678497314,18.685551643371582,25.701316595077515,34.67376089096069\n0.8366460800170898,2.0028655529022217,3.4182419776916504,8.652469873428345,11.459563732147217,15.82176685333252,19.65123200416565,25.646445989608765,32.839797019958496\n0.8481349945068359,2.1892166137695312,3.4449169635772705,8.446269035339355,11.985326766967773,14.893955945968628,19.16008687019348,25.730915069580078,32.78506088256836\n0.9579446315765381,2.219630718231201,3.8541202545166016,8.73814082145691,11.567262411117554,15.347697496414185,19.48912000656128,25.423113346099854,33.98611545562744\n0.9066781997680664,2.083017349243164,3.588240385055542,8.700974941253662,11.331908941268921,15.30103087425232,19.335123538970947,24.858865976333618,33.50572729110718\n0.9746880531311035,2.2279622554779053,4.053081274032593,8.730259418487549,11.936275482177734,15.455094814300537,19.327856302261353,25.628153085708618,32.99900460243225\n0.8930683135986328,2.11864972114563,3.7024741172790527,8.344207286834717,11.746542930603027,15.088658809661865,19.43919348716736,25.075968742370605,33.45540761947632\n0.856304407119751,2.086219310760498,3.6951184272766113,8.34026026725769,11.830772161483765,14.942124843597412,18.9264714717865,25.86812424659729,33.858113288879395\n0.8727133274078369,2.0580806732177734,3.7264912128448486,8.894163370132446,12.052824258804321,14.960201978683472,19.055952310562134,25.78945779800415,32.88934874534607\n0.8761835098266602,2.228577136993408,3.4152305126190186,8.170103073120117,11.750890493392944,15.093035459518433,19.775015115737915,25.31575870513916,33.19825792312622\n0.8753976821899414,2.107287645339966,3.421565532684326,8.800944566726685,12.298073530197144,15.2935950756073,21.14627766609192,25.744181156158447,33.204394578933716\n0.8760173320770264,2.026487350463867,3.8564090728759766,9.433102130889893,11.492626905441284,15.162604808807373,20.254459142684937,25.106750011444092,33.05475950241089\n0.8822832107543945,1.976762294769287,3.630432605743408,9.591833591461182,11.193760871887207,15.723769426345825,21.67072629928589,25.18179702758789,33.00413703918457\n"
  },
  {
    "path": "demo/lsh_exec_times.csv",
    "content": "1000,2000,3000,4000,5000,6000,7000,8000,10000\n0.023784637451171875,0.08341550827026367,0.15126347541809082,0.33096837997436523,0.4911954402923584,0.515388011932373,0.690263032913208,0.9283351898193359,1.3463435173034668\n0.03152871131896973,0.10286164283752441,0.15152525901794434,0.3354344367980957,0.3960435390472412,0.5925371646881104,0.8368349075317383,0.7942116260528564,1.0759913921356201\n0.029398441314697266,0.09281778335571289,0.14369988441467285,0.33049607276916504,0.41103291511535645,0.5936400890350342,0.6794371604919434,0.8794887065887451,1.1260664463043213\n0.03680753707885742,0.0930788516998291,0.1341261863708496,0.29814672470092773,0.4796781539916992,0.5141637325286865,0.6429910659790039,0.8827624320983887,1.1970088481903076\n0.04126882553100586,0.058476924896240234,0.1401991844177246,0.2537209987640381,0.4159054756164551,0.572455883026123,0.6010398864746094,0.8189992904663086,1.230595350265503\n0.03595137596130371,0.08831262588500977,0.11515688896179199,0.28495216369628906,0.378218412399292,0.5232009887695312,0.8514833450317383,0.9663751125335693,1.2129037380218506\n0.04663252830505371,0.09302330017089844,0.13613414764404297,0.26587772369384766,0.45294785499572754,0.5516300201416016,0.6722183227539062,0.8564984798431396,1.0223979949951172\n0.019811153411865234,0.06587505340576172,0.12438511848449707,0.3414583206176758,0.4270482063293457,0.6210315227508545,0.6754748821258545,0.9284231662750244,1.2596681118011475\n0.032245635986328125,0.07642269134521484,0.13181066513061523,0.2506859302520752,0.46262025833129883,0.5530803203582764,0.6569054126739502,0.8058040142059326,1.119572639465332\n0.04127001762390137,0.0894927978515625,0.13326454162597656,0.3099794387817383,0.42696142196655273,0.5447602272033691,0.6076810359954834,0.9187808036804199,1.1909763813018799\n0.03832602500915527,0.08265113830566406,0.11252927780151367,0.2969646453857422,0.4582955837249756,0.5136613845825195,0.5503523349761963,0.9396407604217529,1.115455150604248\n0.03364896774291992,0.07457470893859863,0.211561918258667,0.3259725570678711,0.41351938247680664,0.5627808570861816,0.7018880844116211,0.8254928588867188,1.072371482849121\n0.06022810935974121,0.10837268829345703,0.14995479583740234,0.3509492874145508,0.45815014839172363,0.5309557914733887,0.5962278842926025,0.8092830181121826,1.0732884407043457\n0.03121781349182129,0.07620978355407715,0.1468205451965332,0.33223938941955566,0.37134265899658203,0.5799720287322998,0.6643342971801758,0.8704044818878174,1.0601885318756104\n0.03435707092285156,0.06217193603515625,0.13513994216918945,0.3275594711303711,0.35704994201660156,0.47503113746643066,0.644827127456665,0.8879103660583496,1.2539973258972168\n0.03912711143493652,0.08903884887695312,0.12429618835449219,0.25405335426330566,0.5083765983581543,0.5449907779693604,0.7324926853179932,0.8082249164581299,1.013197660446167\n0.03409695625305176,0.06731986999511719,0.1349468231201172,0.30829381942749023,0.42665600776672363,0.5346364974975586,0.6694869995117188,1.0194809436798096,1.2974910736083984\n0.04721331596374512,0.07333040237426758,0.13997197151184082,0.28172802925109863,0.40156102180480957,0.6314287185668945,0.8593540191650391,0.8863487243652344,1.231271743774414\n0.03896808624267578,0.08074784278869629,0.14035487174987793,0.3700544834136963,0.40504980087280273,0.581559419631958,0.7495737075805664,0.8647575378417969,1.1768264770507812\n0.03593301773071289,0.08340692520141602,0.13669967651367188,0.28766536712646484,0.38652944564819336,0.5708668231964111,0.7684242725372314,0.9323217868804932,1.30
54776191711426\n"
  },
  {
    "path": "demo/random_sample_compiler.py",
    "content": "import os\nimport sys\n\ncurrent_dir = os.path.dirname(os.path.abspath(__file__))\npackage_dir = os.path.dirname(current_dir)\nsys.path.insert(0, package_dir)\n\nimport argparse\nimport datetime\n\nimport numpy as np\nfrom csstuning.compiler.compiler_benchmark import CompilerBenchmarkBase\n\nfrom transopt.Benchmark import construct_test_suits\nfrom transopt.KnowledgeBase.kb_builder import construct_knowledgebase\nfrom transopt.KnowledgeBase.TaskDataHandler import OptTaskDataHandler\nfrom optimizer.construct_optimizer import get_optimizer\n\nos.environ[\"MKL_NUM_THREADS\"] = \"1\"\nos.environ[\"NUMEXPR_NUM_THREADS\"] = \"1\"\nos.environ[\"OMP_NUM_THREADS\"] = \"1\"\n\n\ndef run_experiments(tasks, args):\n    kb = construct_knowledgebase(args)\n    testsuits = construct_test_suits(tasks, args.seed)\n    optimizer = get_optimizer(args)\n    data_handler = OptTaskDataHandler(kb, args)\n    optimizer.optimize(testsuits, data_handler)\n\n\ndef split_into_segments(lst, n):\n    k, m = divmod(len(lst), n)\n    return [lst[i * k + min(i, m) : (i + 1) * k + min(i + 1, m)] for i in range(n)]\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\n        \"samples_num\",\n        type=int,\n        help=\"Number of samples to be collected for each workload\",\n    )\n    parser.add_argument(\n        \"--split_index\",\n        type=int,\n        help=\"Index for splitting the workload segments\",\n        default=0,\n    )\n    args = parser.parse_args()\n    split_index = args.split_index\n    samples_num = args.samples_num\n\n    available_workloads = CompilerBenchmarkBase.AVAILABLE_WORKLOADS\n    collected_workloads = [\n        \"cbench-automotive-susan-c\",\n        \"cbench-automotive-bitcount\",\n        \"cbench-security-rijndael\",\n        \"cbench-consumer-tiff2rgba\",\n        \"cbench-telecom-adpcm-d\",\n        \"cbench-consumer-tiff2bw\",\n        \"cbench-telecom-adpcm-c\",\n        \"cbench-consumer-tiff2dither\",\n        \"cbench-telecom-gsm\",\n        \"cbench-automotive-susan-e\",\n        \"cbench-security-sha\",\n        \"cbench-network-patricia\",\n        \"cbench-telecom-crc32\",\n        \"cbench-security-pgp\",\n        \"cbench-consumer-mad\",\n        \"cbench-automotive-qsort1\",\n        \"polybench-cholesky\",\n        \"polybench-trisolv\",\n        \"polybench-adi\",\n        \"polybench-symm\",\n        \"polybench-gesummv\",\n        \"polybench-gemver\",\n        \"polybench-durbin\",\n        \"polybench-atax\",\n        \"polybench-fdtd-apml\",\n        \"polybench-jacobi-1d-imper\",\n        \"polybench-bicg\",\n        \"polybench-syr2k\",\n        \"polybench-mvt\",\n        \"polybench-lu\",\n        \"polybench-3mm\",\n    ]\n    available_workloads = list(set(available_workloads) - set(collected_workloads))\n\n    split_workloads = split_into_segments(available_workloads, 10)\n\n    if split_index >= len(split_workloads):\n        raise IndexError(\"split index out of range\")\n\n    workloads = split_workloads[split_index]\n\n    tasks = {\n        \"GCC\": {\"budget\": samples_num, \"workloads\": workloads},\n        # \"LLVM\": {\"budget\": samples_num, \"workloads\": workloads},\n    }\n\n    # Get date and set exp name\n    date = datetime.datetime.now().strftime(\"%Y-%m-%d-%H-%M-%S\")\n    exp_name = f\"sampling_compiler_{date}\"\n\n    args = argparse.Namespace(\n        seed=0,\n        optimizer=\"ParEGO\",\n        init_number=2,\n        init_method=\"random\",\n        
exp_path=f\"{package_dir}/../experiment_results\",\n        exp_name=exp_name,\n        verbose=True,\n        normalize=\"norm\",\n        source_num=2,\n        selector=\"None\",\n        save_mode=1,\n        load_mode=False,\n        acquisition_func=\"LCB\",\n    )\n\n    run_experiments(tasks, args)\n"
  },
  {
    "path": "demo/random_sample_dbms.py",
    "content": "current_dir = os.path.dirname(os.path.abspath(__file__))\npackage_dir = os.path.dirname(current_dir)\nsys.path.insert(0, package_dir)\n\nimport argparse\nimport datetime\nimport os\nimport sys\n\nimport numpy as np\nfrom csstuning.dbms.dbms_benchmark import MySQLBenchmark\n\nfrom transopt.benchmark import instantiate_problems\nfrom transopt.KnowledgeBase.kb_builder import construct_knowledgebase\nfrom transopt.KnowledgeBase.TaskDataHandler import OptTaskDataHandler\nfrom optimizer.construct_optimizer import get_optimizer\n\nos.environ[\"MKL_NUM_THREADS\"] = \"1\"\nos.environ[\"NUMEXPR_NUM_THREADS\"] = \"1\"\nos.environ[\"OMP_NUM_THREADS\"] = \"1\"\n\n\ndef run_experiments(tasks, args):\n    kb = construct_knowledgebase(args)\n    testsuits = instantiate_problems(tasks, args.seed)\n    optimizer = get_optimizer(args)\n    data_handler = OptTaskDataHandler(kb, args)\n    optimizer.optimize(testsuits, data_handler)\n\n\ndef split_into_segments(lst, n):\n    k, m = divmod(len(lst), n)\n    return [lst[i * k + min(i, m) : (i + 1) * k + min(i + 1, m)] for i in range(n)]\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\n        \"--samples_num\", type=int, help=\"Number of samples to be collected for each workload\", default=6\n    )\n    parser.add_argument(\n        \"--split_index\", type=int, help=\"Index for splitting the workload segments\", default=0\n    )\n    args = parser.parse_args()\n    split_index = args.split_index\n    samples_num = args.samples_num\n    \n    available_workloads = MySQLBenchmark.AVAILABLE_WORKLOADS\n    split_workloads = split_into_segments(available_workloads, 6)\n\n    if split_index >= len(split_workloads):\n        raise IndexError(\"split index out of range\")\n\n    workloads = split_workloads[split_index]\n\n    tasks = {\n        \"DBMS\": {\"budget\": samples_num, \"workloads\": workloads},\n    }\n\n    # Get date and set exp name\n    date = datetime.datetime.now().strftime(\"%Y-%m-%d-%H-%M-%S\")\n    exp_name = f\"sampling_dbms_{date}\"\n\n    args = argparse.Namespace(\n        seed=0,\n        optimizer=\"ParEGO\",\n        init_number=2,\n        init_method=\"random\",\n        exp_path=f\"{package_dir}/../experiment_results\",\n        exp_name=exp_name,\n        verbose=True,\n        normalize=\"norm\",\n        source_num=2,\n        selector=\"None\",\n        save_mode=1,\n        load_mode=False,\n        acquisition_func=\"LCB\",\n    )\n\n    run_experiments(tasks, args)\n"
  },
  {
    "path": "demo/sampling/random_sample_compiler.py",
    "content": "import os\nimport sys\nfrom pathlib import Path\n\ncurrent_dir = Path(__file__).resolve().parent\npackage_dir = current_dir.parent.parent\nsys.path.insert(0, str(package_dir))\n\nimport argparse\nimport datetime\n\nimport numpy as np\nfrom csstuning.compiler.compiler_benchmark import CompilerBenchmarkBase\n\nfrom transopt.benchmark import instantiate_problems\nfrom transopt.KnowledgeBase.kb_builder import construct_knowledgebase\nfrom transopt.KnowledgeBase.TransferDataHandler import OptTaskDataHandler\nfrom optimizer.construct_optimizer import get_optimizer\n\nos.environ[\"MKL_NUM_THREADS\"] = \"1\"\nos.environ[\"NUMEXPR_NUM_THREADS\"] = \"1\"\nos.environ[\"OMP_NUM_THREADS\"] = \"1\"\n\n\ndef run_experiments(tasks, args):\n    kb = construct_knowledgebase(args)\n    testsuits = instantiate_problems(tasks, args.seed)\n    optimizer = get_optimizer(args)\n    data_handler = OptTaskDataHandler(kb, args)\n    optimizer.optimize(testsuits, data_handler)\n\n\ndef split_into_segments(lst, n):\n    k, m = divmod(len(lst), n)\n    return [lst[i * k + min(i, m) : (i + 1) * k + min(i + 1, m)] for i in range(n)]\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\n        \"--samples_num\",\n        type=int,\n        help=\"Number of samples to be collected for each workload\",\n        default=5000,\n    )\n    parser.add_argument(\n        \"--split_index\",\n        type=int,\n        help=\"Index for splitting the workload segments\",\n        default=0,\n    )\n    args = parser.parse_args()\n    split_index = args.split_index\n    samples_num = args.samples_num\n\n    available_workloads = CompilerBenchmarkBase.AVAILABLE_WORKLOADS\n    # available_workloads = [\n    #     \"polybench-jacobi-2d-imper\",\n    #     \"polybench-dynprog\",\n    #     \"polybench-medley-reg-detect\",\n    #     \"polybench-trmm\",\n    #     \"polybench-gemm\",\n    #     \"cbench-automotive-susan-s\",\n    #     \"cbench-network-dijkstra\",\n    #     \"cbench-consumer-jpeg-c\",\n    #     \"cbench-bzip2\",\n    # ]\n\n    split_workloads = split_into_segments(available_workloads, 10)\n\n    if split_index >= len(split_workloads):\n        raise IndexError(\"split index out of range\")\n\n    workloads = split_workloads[split_index]\n\n    tasks = {\n        # \"GCC\": {\"budget\": samples_num, \"workloads\": workloads},\n        \"LLVM\": {\"budget\": samples_num, \"workloads\": workloads},\n    }\n\n    # Get date and set exp name\n    date = datetime.datetime.now().strftime(\"%Y-%m-%d-%H-%M-%S\")\n    exp_name = f\"sampling_compiler_{date}\"\n\n    args = argparse.Namespace(\n        seed=0,\n        optimizer=\"ParEGO\",\n        init_number=100,\n        init_method=\"random\",\n        exp_path=f\"{package_dir}/experiment_results\",\n        exp_name=exp_name,\n        verbose=True,\n        normalize=\"norm\",\n        source_num=2,\n        selector=None,\n        save_mode=1,\n        load_mode=False,\n        acquisition_func=\"LCB\",\n    )\n\n    run_experiments(tasks, args)\n"
  },
  {
    "path": "demo/sampling/random_sample_dbms.py",
    "content": "import os\nimport sys\nfrom pathlib import Path\n\ncurrent_dir = Path(__file__).resolve().parent\npackage_dir = current_dir.parent.parent\nsys.path.insert(0, str(package_dir))\n\nimport argparse\nimport datetime\nimport os\nimport sys\n\nimport numpy as np\nfrom csstuning.dbms.dbms_benchmark import MySQLBenchmark\n\nfrom transopt.benchmark import instantiate_problems\nfrom transopt.KnowledgeBase.kb_builder import construct_knowledgebase\nfrom transopt.KnowledgeBase.TransferDataHandler import OptTaskDataHandler\nfrom optimizer.construct_optimizer import get_optimizer\n\nos.environ[\"MKL_NUM_THREADS\"] = \"1\"\nos.environ[\"NUMEXPR_NUM_THREADS\"] = \"1\"\nos.environ[\"OMP_NUM_THREADS\"] = \"1\"\n\n\ndef run_experiments(tasks, args):\n    kb = construct_knowledgebase(args)\n    testsuits = instantiate_problems(tasks, args.seed)\n    optimizer = get_optimizer(args)\n    data_handler = OptTaskDataHandler(kb, args)\n    optimizer.optimize(testsuits, data_handler)\n\n\ndef split_into_segments(lst, n):\n    k, m = divmod(len(lst), n)\n    return [lst[i * k + min(i, m) : (i + 1) * k + min(i + 1, m)] for i in range(n)]\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\n        \"--samples_num\", type=int, help=\"Number of samples to be collected for each workload\", default=10\n    )\n    parser.add_argument(\n        \"--split_index\", type=int, help=\"Index for splitting the workload segments\", default=0\n    )\n    args = parser.parse_args()\n    split_index = args.split_index\n    samples_num = args.samples_num\n    \n    available_workloads = MySQLBenchmark.AVAILABLE_WORKLOADS\n    split_workloads = split_into_segments(available_workloads, 6)\n\n    if split_index >= len(split_workloads):\n        raise IndexError(\"split index out of range\")\n\n    workloads = split_workloads[split_index]\n\n    tasks = {\n        \"DBMS\": {\"budget\": samples_num, \"workloads\": workloads},\n    }\n\n    # Get date and set exp name\n    date = datetime.datetime.now().strftime(\"%Y-%m-%d-%H-%M-%S\")\n    exp_name = f\"sampling_dbms_{date}\"\n\n    args = argparse.Namespace(\n        seed=0,\n        optimizer=\"ParEGO\",\n        init_number=10,\n        init_method=\"random\",\n        exp_path=f\"{package_dir}/experiment_results\",\n        exp_name=exp_name,\n        verbose=True,\n        normalize=\"norm\",\n        source_num=2,\n        selector=None,\n        save_mode=1,\n        load_mode=False,\n        acquisition_func=\"LCB\",\n    )\n\n    run_experiments(tasks, args)\n"
  },
  {
    "path": "docs/Makefile",
    "content": "# Minimal makefile for Sphinx documentation\n#\n\n# You can set these variables from the command line, and also\n# from the environment for the first two.\nSPHINXOPTS    ?=\nSPHINXBUILD   ?= sphinx-build\nSOURCEDIR     = source\nBUILDDIR      = build\n\n# Put it first so that \"make\" without argument is like \"make help\".\nhelp:\n\t@$(SPHINXBUILD) -M help \"$(SOURCEDIR)\" \"$(BUILDDIR)\" $(SPHINXOPTS) $(O)\n\n.PHONY: help Makefile\n\n# Catch-all target: route all unknown targets to Sphinx using the new\n# \"make mode\" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).\n%: Makefile\n\t@$(SPHINXBUILD) -M $@ \"$(SOURCEDIR)\" \"$(BUILDDIR)\" $(SPHINXOPTS) $(O)\n"
  },
  {
    "path": "docs/make.bat",
    "content": "@ECHO OFF\r\n\r\npushd %~dp0\r\n\r\nREM Command file for Sphinx documentation\r\n\r\nif \"%SPHINXBUILD%\" == \"\" (\r\n\tset SPHINXBUILD=sphinx-build\r\n)\r\nset SOURCEDIR=source\r\nset BUILDDIR=build\r\n\r\n%SPHINXBUILD% >NUL 2>NUL\r\nif errorlevel 9009 (\r\n\techo.\r\n\techo.The 'sphinx-build' command was not found. Make sure you have Sphinx\r\n\techo.installed, then set the SPHINXBUILD environment variable to point\r\n\techo.to the full path of the 'sphinx-build' executable. Alternatively you\r\n\techo.may add the Sphinx directory to PATH.\r\n\techo.\r\n\techo.If you don't have Sphinx installed, grab it from\r\n\techo.https://www.sphinx-doc.org/\r\n\texit /b 1\r\n)\r\n\r\nif \"%1\" == \"\" goto help\r\n\r\n%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%\r\ngoto end\r\n\r\n:help\r\n%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%\r\n\r\n:end\r\npopd\r\n"
  },
  {
    "path": "docs/source/_static/custom.css",
    "content": ".bd-sidebar-secondary {\n    display: none !important;\n}\n\n/* 让 bd-article 占据 100% 宽度 */\n.bd-main .bd-content {\n    flex-grow: 1;\n    max-width: 100%;\n    width: 100%;\n}\n\n.bd-article-container {\n    max-width: 100% !important;\n    width: 100% !important;\n}\n\n\n.bd-article {\n    max-width: 100% !important;\n    width: 100% !important;\n}\n\n.bd-sidebar-primary {\n    flex: 0 0 250px; /* 减小宽度 */\n    max-width: 250px;\n    padding: 0;\n}\n\n.bd-page-width {\n    max-width: 100% !important;\n    padding-left: 0 !important;\n    padding-right: 0 !important;\n}"
  },
  {
    "path": "docs/source/conf.py",
    "content": "# Configuration file for the Sphinx documentation builder.\n#\n# For the full list of built-in configuration values, see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Project information -----------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information\nimport os\nimport sys\nfrom os.path import dirname\n\n\n\nSOURCE = os.path.dirname(os.path.realpath(__file__))\n\n\n\nsys.path.insert(0, SOURCE)\n\nproject = 'TransOPT: Transfer Optimization System for Bayesian Optimization Using Transfer Learning'\ncopyright = '2024, Peili Mao'\nauthor = 'Peili Mao'\nrelease = '0.1.0'\n\n\n\n\n# -- General configuration ---------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration\n\nextensions =[\n    'sphinx.ext.autodoc',\n    'sphinx.ext.napoleon',\n    'sphinx_rtd_theme',\n    'sphinxcontrib.bibtex',\n    \n    'sphinx_togglebutton',\n    \n    'sphinx.ext.mathjax',\n    'sphinx.ext.autosummary',\n    # 'numpydoc',\n    # 'nbsphinx',\n    'sphinx.ext.intersphinx',\n    'sphinx.ext.coverage',\n    # 'matplotlib.sphinxext.plot_directive',\n    ]\n\ntemplates_path = ['_templates']\nexclude_patterns = []\n\nbibtex_bibfiles = ['usage/TOS.bib']\n\nhtml_logo = \"_static//figures/transopt_logo.jpg\"\n# html_favicon = '_static/favicon.ico'\n\n\n# -- Options for HTML output -------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output\n\nhtml_theme = 'sphinx_book_theme'\nhtml_static_path = ['_static']\nhtml_css_files = [\n    'custom.css',\n]\n\n\n\nmaster_doc = 'index'"
  },
  {
    "path": "docs/source/development/api_reference.rst",
    "content": "API Reference\n=============\n\nThis section provides a detailed reference for the TransOPT API, including descriptions of all available endpoints and methods.\n\n.. automodule:: transopt\n   :members:"
  },
  {
    "path": "docs/source/development/architecture.rst",
    "content": "Architecture Overview\n======================\n\nThis section provides an overview of the architecture of the TransOPT software, illustrating the key components and workflows involved in its operation.\n\nSystem Architecture\n-------------------\n\nThe following diagram provides a high-level view of the entire system architecture of TransOPT, showing the interaction between various components.\n\n.. image:: ../images/system_architecture.pdf\n   :alt: System Architecture Diagram\n   :width: 600px\n   :align: center\n\nWorkflow\n--------\n\nThe workflow for using TransOPT is illustrated below. This diagram shows the typical steps a user would follow when working with TransOPT, from defining the problem to obtaining the optimization results.\n\n.. image:: ../images/workflow.pdf\n   :alt: TransOPT Workflow\n   :width: 600px\n   :align: center\n\nOptimizer Architecture\n----------------------\n\nTransOPT includes different optimization algorithms. The following diagram highlights the difference between the standard Bayesian Optimization (BO) and Transfer Learning for Bayesian Optimization (TLBO).\n\n### BO vs. Transfer BO\n\n.. image:: ../images/bo_vs_tlbo.pdf\n   :alt: BO vs. Transfer BO\n   :width: 600px\n   :align: center\n\n### Optimizer Workflow\n\nThe diagram below illustrates the workflow of the optimizer component within TransOPT, showing how it integrates with other system components.\n\n.. image:: ../images/optimizer.pdf\n   :alt: Optimizer Workflow\n   :width: 600px\n   :align: center\n\nData Management\n---------------\n\nData management is a critical component of TransOPT, handling the storage, retrieval, and processing of data required for optimization tasks. The following diagram provides an overview of how data is managed within the system.\n\n.. image:: ../images/data_management.pdf\n   :alt: Data Management Overview\n   :width: 600px\n   :align: center\n\nConclusion\n----------\n\nThe architecture of TransOPT is designed to be modular and flexible, allowing for easy integration of new algorithms and data management strategies. This overview provides a snapshot of the system's key components and their interactions, setting the stage for more detailed exploration in subsequent sections.\n\n"
  },
  {
    "path": "docs/source/faq.rst",
    "content": "FAQ\n================================\n\nThis section addresses common questions and issues that users might encounter when using TransOPT.\n\nHow do I submit the error information to the maintainer?\n--------------------------------------------------------\nClick on the `Submit error` button on the bottom right corner of the dashboard page. Type in the error information and click on the `Submit` button, the error \ninformation will be sent to the maintainer.\n\n\n\nHow do I report a bug?\n----------------------\n1. Clone the repository:\n\n   ::\n\n     $ export NODE_OPTIONS=--openssl-legacy-provider\n\n"
  },
  {
    "path": "docs/source/home/feature.html",
    "content": "<link rel=\"stylesheet\" href=\"https://use.fontawesome.com/releases/v5.8.1/css/all.css\"\n      integrity=\"sha384-50oBUHEmvpQ+1lW4y57PTFmhCaXp0ML5d60M1M7uH2+nqUivzIebhndOJK28anvf\" crossorigin=\"anonymous\">\n\n<style>\n    #wrapper h4 {\n        margin: 5px 0px 15px 0px;\n        font-weight: 400;\n    }\n\n    .entry {\n        height: 100%;\n        width: 100%;\n    }\n\n    .border {\n        border: 1px solid #DCDCDC;\n    }\n\n    .icon {\n        margin-top: 5px;\n    }\n\n    .entry:hover {\n        background: #DCDCDC;\n        cursor: pointer;\n    }\n</style>\n\n<div id=\"wrapper\">\n    <div class=\"row row-eq-height\">\n        <div class=\"col-12 col-lg-6 p-1\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-cogs fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\">Composite Algorithm Design</p>\n                        <a>Search Space:</a> <a href=\"usage/algorithms.html\"> Automated prune search space, ...</a><br>\n                        <a>Initialization:</a> <a href=\"usage/algorithms.html\"> Meta-learn based initialization, EA-based initialization ...</a><br>\n                        <a>Surrogate Model:</a> <a href=\"usage/algorithms.html\"> MTGP, RGPE, Neural Process, ...</a><br>\n                        <a>Acquisition Function:</a> <a href=\"usage/algorithms.html\"> Transfre acquisition function, RL acquisition function, ...</a><br>\n                    </div>\n                </div>\n            </div>\n        </div>\n\n        <div class=\"col-12 col-lg-6 p-1\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-database fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a>Robust Data Management</a></p>\n                        <a>Embedded Database:</a> <a href=\"usage/algorithms.html\"> Utilizes SQLite as an embedded database, enabling seamless data management.</a><br>\n                        <a>Integration with External Datasets:</a> <a href=\"usage/algorithms.html\"> Allows integration of public datasets to enhance analysis.</a><br>\n                        <a>Data Retrieve:</a> <a href=\"usage/algorithms.html\"> Local-sensitivity based data retrieval approach.</a><br>\n                    </div>\n                </div>\n            </div>\n        </div>\n    </div>\n\n    <div class=\"row row-eq-height\">\n        <div class=\"col-12 col-lg-6 p-1\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-tasks fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a>Benchmark Problems</a></p>\n                        <a>Synthetic Problems:</a> <a href=\"usage/problems.html\"> Ackley, ...</a><br>\n                        <a>Configurable Software Tuning:</a> <a href=\"usage/problems.html\">GCC, LLVM, MySQL, Hadoop, ...</a><br>\n                        <a>Hyperparameter Optimization:</a> <a href=\"usage/problems.html\"> ResNet, DenseNet, AlexNet, ...</a><br>\n             
           <a>Protein Inverse Folding:</a> <a href=\"usage/problems.html\">Protein Data Bank, CATH, ...</a>,\n                        <a>RNA Inverse Design:</a> <a href=\"usage/problems.html\">Eterna100, RNAStralign, Rfam-learn, ...</a>,\n                    </div>\n                </div>\n            </div>\n        </div>\n\n        <div class=\"col-12 col-lg-6 p-1\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-globe fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a>Web User Interface</a></p>\n                        <a>Intuitive Navigation:</a> <a href=\"usage/results.html\"> Clear menus and sidebars for easy access to features.</a><br>\n                        <a>Interactive Data Visualization:</a> <a href=\"usage/results.html\"> Real-time charts and graphs for data results.</a><br>\n                        <a>Data Upload Functionality:</a> <a href=\"usage/results.html\"> Direct upload of datasets for transfer and optimization.</a><br>\n                        <a>LLM-powered-chatbot:</a> <a href=\"usage/results.html\">Enables natural language interaction.</a><br>\n                    </div>\n                </div>\n            </div>\n        </div>\n    </div>\n\n    <div class=\"row row-eq-height\">\n        <div class=\"col-12 col-lg-6 p-1\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-chart-bar fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a>Results Analysis</a></p>\n                        <a>Performance Indicator:</a> <a href=\"usage/results.html\"> MAE, GC-content, Max RSS, ...</a><br>\n                        <a>Statistical Measures:</a> <a href=\"usage/results.html\"> Wilcoxon signed-rank test, Scott-Knott test, Critical difference, ...</a><br>\n                        <a>Visualization:</a> <a href=\"usage/results.html\"> Optimization trajectory, Multidimensional scaling, ...</a><br>\n\n                    </div>\n                </div>\n            </div>\n        </div>\n\n        <div class=\"col-12 col-lg-6 p-1\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-puzzle-piece fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\">More features are coming soon</p>\n                        <p>...</p>\n                    </div>\n                </div>\n            </div>\n        </div>\n    </div>\n</div>\n"
  },
  {
    "path": "docs/source/home/guide.html",
    "content": "<style>\n\n    .zoom:hover {\n        transform: scale(1.07);\n    }\n</style>\n\n\n<div class=\"container\">\n    <div class=\"row row-eq-height\">\n\n\n        <div class=\"col-md d-flex my-1 mx-2 overflow-hidden\">\n            <div class=\"card\">\n                <a href=\"installation.html\"><img class=\"card-img-top w-100 zoom\"\n                                                          src=\"_static/figures/giant.png\"\n                                                          alt=\"Icon Getting Started\"></a>\n                <div class=\"card-body\">\n                    <p class=\"card-text\"><b>Getting Started:</b> The key steps in using TransOPT: Installation, Algorithm Selection, Benchmarking Problems, Visualization, and Data Management for effective transfer learning optimization.</p>\n                </div>\n            </div>\n        </div>\n\n\n        <div class=\"col-md d-flex my-1 mx-2 overflow-hidden\">\n            <div class=\"card\">\n                <a href=\"https://colalab.ai/\" data-toggle=\"modal\" data-target=\"#colalab\"><img class=\"card-img-top w-100 zoom\"\n                                                                              src=\"_static/figures/colalab.png\"\n                                                                              alt=\"Icon colalab\"></a>\n                <div class=\"card-body\">\n                    <p class=\"card-text\"><b>About Us:</b> COLA laboratory is working in computational/artificial intelligence, multi-objective optimization and decision-making, operational research...</p>\n                </div>\n            </div>\n        </div>\n\n        <div class=\"col-md d-flex my-1 mx-2 overflow-hidden\">\n            <div class=\"card\">\n                <a href=\"\"><img class=\"card-img-top w-100 zoom\"\n                                                                              src=\"_static/figures/research.png\"\n                                                                              alt=\"Research with this Package\"></a>\n                <div class=\"card-body\">\n                    <p class=\"card-text\"><b>News:</b> \n                        Our system has been applied in various studies, including protein design, hyperparameter optimization... </p>\n                </div>\n            </div>\n        </div>\n\n\n    </div>\n</div>"
  },
  {
    "path": "docs/source/home/portfolio.html",
    "content": "<link rel=\"stylesheet\" href=\"https://use.fontawesome.com/releases/v5.8.1/css/all.css\"\n      integrity=\"sha384-50oBUHEmvpQ+1lW4y57PTFmhCaXp0ML5d60M1M7uH2+nqUivzIebhndOJK28anvf\" crossorigin=\"anonymous\">\n\n<style>\n\n    #wrapper h4 {\n        margin: 5px 0px 15px 0px;\n        font-weight: 400;\n    }\n\n    .entry {\n        height: 100%;\n        width: 100%;\n    }\n\n\n    .border {\n        border: 1px solid #DCDCDC;\n    }\n\n    .icon {\n        margin-top: 5px;\n    }\n\n    .entry:hover {\n        background: #DCDCDC;\n        cursor: pointer;\n    }\n\n\n\n\n</style>\n\n<div id=\"wrapper\">\n\n    <div class=\"row row-eq-height\">\n\n        <div class=\"col-12 col-lg-6 p-1\" onclick=\"location.href='interface/index.html';\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-bullhorn fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a href=\"interface/index.html\">Interface</a></p>\n\n                        <b>Function:</b>\n                        <a href=\"interface/minimize.html\">minimize</a>\n                        <p style=\"margin-bottom:0.4em;\"></p><b>Parameters:</b>\n                        <a href=\"problems/index.html\">Problem</a>,\n                        <a href=\"algorithms/index.html\">Algorithm</a>,\n                        <a href=\"interface/termination.html\">Termination</a>\n                        <p style=\"margin-bottom:0.4em;\"></p>\n\n                        <b>Optionals:</b>\n                        <a href=\"interface/callback.html\">Callback</a>,\n                        <a href=\"interface/display.html\">Display</a>,\n                        <a href=\"interface/minimize.html\">...</a>\n\n                        <p style=\"margin-bottom:0.4em;\"></p>\n\n                        <b>Returns:</b> <a href=\"interface/result.html\">Result</a>\n\n                        <br>\n                        <p style=\"margin-bottom:0.7em;\"></p>\n\n                        <b>Related:</b>\n                        <a href=\"algorithms/usage.html\">Ask and Tell</a><img class=\"new-flag\" src=\"_static/new-flag.svg\">,\n                        <a href=\"misc/checkpoint.html\">Checkpoints</a>\n                    </div>\n                </div>\n            </div>\n        </div>\n\n        <div class=\"col-12 col-lg-6 p-1\" onclick=\"location.href='problems/index.html';\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-chess fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a href=\"problems/index.html\">Problems</a></p>\n\n                        <b>Single-objective:</b>\n                        <a href=\"problems/single/ackley.html\">Ackley</a>,\n                        <a href=\"problems/single/griewank.html\">Griewank</a>,\n                        <a href=\"problems/single/rastrigin.html\">Rastrigin</a>,\n                        <a href=\"problems/single/rosenbrock.html\">Rosenbrock</a>,\n                        <a href=\"problems/single/zakharov.html\">Zakharov</a>,\n                        <a href=\"problems/index.html#Single-Objective\">...</a>\n                        <br>\n 
                       <p style=\"margin-bottom:0.4em;\"></p>\n\n\n                        <b>Multi-objective:</b>\n                        <a href=\"problems/multi/bnh.html\">BNH</a>,\n                        <a href=\"problems/multi/osy.html\">OSY</a>,\n                        <a href=\"problems/multi/tnk.html\">TNK</a>,\n                        <a href=\"problems/multi/truss2d.html\">Truss2d</a>,\n                        <a href=\"problems/multi/welded_beam.html\">Welded Beam</a>,\n                        <a href=\"problems/multi/zdt.html\">ZDT</a>,\n                        <a href=\"problems/index.html#Multi-Objective\">...</a>\n                        <br>\n                        <p style=\"margin-bottom:0.4em;\"></p>\n\n\n                        <b>Many-objective:</b>\n                        <a href=\"problems/many/dtlz.html\">DTLZ</a>,\n                        WFG\n                        <br>\n                        <p style=\"margin-bottom:0.7em;\"></p>\n\n                        <b>Constrained:</b>\n                        CTP,\n                        <a href=\"problems/constrained/dascmop.html\">DASCMOP</a>,\n                        <a href=\"problems/constrained/modact.html\">MODAct</a>,\n                        <a href=\"problems/constrained/mw.html\">MW</a>,\n                        CDTLZ\n                        <br>\n                        <p style=\"margin-bottom:0.4em;\"></p>\n\n                        <b>Dynamic:</b>\n                        <a href=\"problems/dynamic/df.html\">DF</a>\n                        \n                        <br>\n                        <p style=\"margin-bottom:0.7em;\"></p>\n                        \n\n                        <b>Related:</b>\n                        <a href=\"problems/definition.html\">Problem Definition</a>,\n                        <a href=\"gradients/index.html\">Gradients</a>,\n                        <a href=\"problems/parallelization.html\">Parallelization</a>\n                    </div>\n                </div>\n            </div>\n        </div>\n\n\n    </div>\n\n\n    <div class=\"row row-eq-height\">\n\n        <div class=\"col-12 col-lg-6 p-1\"onclick=\"location.href='algorithms/index.html';\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-search fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a href=\"algorithms/index.html\">Algorithms</a></p>\n\n                        <b>Single-objective:</b>\n                        <a href=\"algorithms/soo/ga.html\">GA</a>,\n                        <a href=\"algorithms/soo/de.html\">DE</a>,\n                        <a href=\"algorithms/soo/pso.html\">PSO</a>,\n                        <a href=\"algorithms/soo/nelder.html\">Nelder Mead</a>,\n                        <a href=\"algorithms/soo/pattern.html\">Pattern Search</a>,\n                        <a href=\"algorithms/soo/brkga.html\">BRKGA</a>,\n                        <a href=\"algorithms/soo/es.html\">ES</a>,\n                        <a href=\"algorithms/soo/sres.html\">SRES</a>,\n                        <a href=\"algorithms/soo/isres.html\">ISRES</a>,\n                        <a href=\"algorithms/soo/cmaes.html\">CMA-ES</a>,\n                        <a href=\"algorithms/soo/g3pcx.html\">G3PCX</a><img class=\"new-flag\" src=\"_static/new-flag.svg\">\n\n                        <p 
style=\"margin-bottom:0.4em;\"></p>\n                        <p style=\"margin-bottom:0.4em;\"></p>\n\n                        <b>Multi-objective:</b>\n                        <a href=\"algorithms/moo/nsga2.html\">NSGA-II</a>,\n                        <a href=\"algorithms/moo/rnsga2.html\">R-NSGA-II</a>\n\n                        <br>\n                        <p style=\"margin-bottom:0.4em;\"></p>\n\n                        <b>Many-objective:</b>\n                        <a href=\"algorithms/moo/nsga3.html\">NSGA-III</a>,\n                        <a href=\"algorithms/moo/rnsga3.html\">R-NSGA-III</a>,\n                        <a href=\"algorithms/moo/unsga3.html\">U-NSGA-III</a>,\n                        <a href=\"algorithms/moo/moead.html\">MOEA/D</a>,\n                        <a href=\"algorithms/moo/age.html\">AGE-MOEA</a>,\n                        <a href=\"algorithms/moo/age2.html\">AGE-MOEA2</a>,\n                        <a href=\"algorithms/moo/rvea.html\">RVEA</a>,\n                        <a href=\"algorithms/moo/sms.html\">SMS-EMOA</a>\n                        <br>\n                        \n                        <b>Dynamic:</b>\n                        <a href=\"algorithms/moo/dnsga2.html\">D-NSGA-II</a>,\n                        <a href=\"algorithms/moo/kgb.html\">KGB</a><img class=\"new-flag\" src=\"_static/new-flag.svg\">\n                        <br>\n                        <p style=\"margin-bottom:0.7em;\"></p>\n\n                        <b>Related:</b>\n                        <a href=\"misc/reference_directions.html\">Reference Directions</a>,\n                        <a href=\"constraints/index.html\">Constraints</a>,\n                        <a href=\"misc/convergence.html\">Convergence</a>,\n                        <a href=\"algorithms/hyperparameters.html\">Hyperparameters</a><img class=\"new-flag\" src=\"_static/new-flag.svg\">\n\n\n                    </div>\n                </div>\n            </div>\n        </div>\n\n        <div class=\"col-12 col-lg-6 p-1\" onclick=\"location.href='customization/index.html';\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-book-open fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a href=\"customization/index.html\">Customization</a></p>\n\n                        <b>Variable Types:</b>\n                        <a href=\"customization/binary.html\">Binary</a>,\n                        <a href=\"customization/discrete.html\">Discrete</a>,\n                        <a href=\"customization/permutation.html\">Permutation</a>,\n                        <a href=\"customization/mixed.html\">Mixed</a><img class=\"new-flag\" src=\"_static/new-flag.svg\">,\n                        <a href=\"customization/custom.html\">Custom</a>\n\n                        <br>\n                        <p style=\"margin-bottom:0.4em;\"></p>\n\n                        <b>Examples:</b>\n                        <a href=\"customization/initialization.html\">Biased Initialization</a>,\n                        <a href=\"customization/permutation.html#Traveling-Salesman-Problem-(TSP)\">Traveling Salesman</a>\n                    </div>\n\n                </div>\n            </div>\n        </div>\n\n\n    </div>\n\n\n    <div class=\"row row-eq-height\">\n\n        <div class=\"col-12 col-lg-6 
p-1\"onclick=\"location.href='operators/index.html';\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n\n\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-tools fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a href=\"operators/index.html\">Operators</a></p>\n\n\n                        <a href=\"operators/sampling.html\">Sampling:</a>\n                        Random, LHS\n                        </br>\n                        <a href=\"operators/selection.html\">Selection:</a>\n                        Random, Binary Tournament\n\n                        </br>\n                        <a href=\"operators/crossover.html\">Crossover:</a>\n                        SBX, UX, HUX, DE Point, Exponential, OX, ERX\n                        </br>\n\n                        <a href=\"operators/mutation.html\">Mutation:</a>\n                        Polynomial, Bitflip, Inverse Mutation\n                        </br>\n                        <a href=\"operators/repair.html\">Repair</a>\n\n                    </div>\n\n                </div>\n            </div>\n        </div>\n\n        <div class=\"col-12 col-lg-6 p-1\" onclick=\"location.href='visualization/index.html';\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-chart-line fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a href=\"visualization/index.html\">Visualization</a></p>\n\n                        <a href=\"visualization/scatter.html\">Scatter Plot (2D/3D/ND)</a>,\n                        <a href=\"visualization/pcp.html\">Parallel Coordinate Plot (PCP) </a>,\n                        <a href=\"visualization/radviz.html\">Radviz</a>,\n                        <a href=\"visualization/star.html\">Star Coordinates</a>,\n                        <a href=\"visualization/heatmap.html\">Heatmap</a>,\n                        <a href=\"visualization/petal.html\">Petal Diagram</a>,\n                        <a href=\"visualization/radar.html\">Spider Web / Radar</a>,\n                        <a href=\"visualization/video.html\">Video</a>\n\n                    </div>\n                </div>\n            </div>\n        </div>\n\n\n    </div>\n\n\n    <div class=\"row row-eq-height\">\n\n        <div class=\"col-12 col-lg-6 p-1\" onclick=\"location.href='mcdm/index.html';\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-balance-scale fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a href=\"mcdm/index.html\">Multi-Criteria Decision Making</a></p>\n\n                        <a href=\"mcdm/index.html#nb-compromise\">Compromise Programming</a>,\n                        <a href=\"mcdm/index.html#nb-pseudo-weights\">Pseudo Weights</a>,\n                        <a href=\"mcdm/index.html#nb-high-tradeoff\">High Trade-off Points</a>\n                    </div>\n                </div>\n            </div>\n        </div>\n\n\n        <div class=\"col-12 col-lg-6 p-1\" 
onclick=\"location.href='misc/indicators.html';\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-medal fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a href=\"misc/indicators.html\">Performance Indicator</a></p>\n\n                        <a href=\"misc/indicators.html#nb-gd\">GD</a>,\n                        <a href=\"misc/indicators.html#nb-gd-plus\">GD+</a>,\n                        <a href=\"misc/indicators.html#nb-igd\">IGD</a>,\n                        <a href=\"misc/indicators.html#nb-igd-plus\">IGD+</a>,\n                        <a href=\"misc/indicators.html#nb-hv\">Hypervolume</a>,\n                        <a href=\"misc/kktpm.html\">KKTPM</a>\n                    </div>\n                </div>\n            </div>\n        </div>\n\n    </div>\n\n        <div class=\"row row-eq-height\">\n\n        <div class=\"col-12 col-lg-6 p-1\" onclick=\"location.href='misc/decomposition.html';\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-layer-group fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a href=\"misc/decomposition.html\">Decomposition</a></p>\n\n                        <a href=\"misc/decomposition.html#nb-weighted-sum\">Weighted-Sum</a>,\n                        <a href=\"misc/decomposition.html#nb-asf\">ASF</a>,\n                        <a href=\"misc/decomposition.html#nb-aasf\">AASF</a>,\n                        <a href=\"misc/decomposition.html#nb-tchebi\">Tchebysheff</a>,\n                        <a href=\"misc/decomposition.html#nb-pbi\">PBI</a>\n                    </div>\n                </div>\n            </div>\n        </div>\n\n\n        <div class=\"col-12 col-lg-6 p-1\" onclick=\"location.href='case_studies/index.html';\">\n            <div class=\"entry border p-3\">\n                <div class=\"d-flex flex-row\">\n                    <div class=\"icon col-2\">\n                        <i class=\"fas fa-business-time fa-2x\"></i>\n                    </div>\n                    <div class=\"desc col-10\">\n                        <p class=\"portfolio-heading\"><a href=\"case_studies/index.html\">Case Studies</a></p>\n                        <a href=\"case_studies/subset_selection.html\">Subset Selection</a>,\n                        <a href=\"case_studies/portfolio_allocation.html\">Portfolio Allocation</a><img class=\"new-flag\" src=\"_static/new-flag.svg\">\n                        \n\n                    </div>\n                </div>\n            </div>\n        </div>\n\n    </div>\n\n\n\n\n\n</div>"
  },
  {
    "path": "docs/source/index.rst",
    "content": ".. TransOPT documentation master file, created by\n   sphinx-quickstart on Mon Aug 19 16:00:09 2024.\n   You can adapt this file completely to your liking, but it should at least\n   contain the root `toctree` directive.\n\n.. _home:\n\n\nTRANSOPT: Transfer Optimization System for Bayesian Optimization Using Transfer Learning\n========================================================================================\nTransOPT is an open-source software platform designed to facilitate the design, benchmarking, and application of transfer learning for Bayesian optimization (TLBO) algorithms through a modular, data-centric framework.\n\n.. raw:: html\n   :file: home/guide.html\n\n\nVideo Demonstration\n********************************************************************************\nWatch the following video for a quick overview of TransOPT's capabilities:\n\n.. raw:: html\n\n   <iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/8l25_6fArxY?si=7WunSY06lrQNbkkb\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen></iframe>\n\n\nFeatures\n********************************************************************************\nTransOPT offers diverse features covering various aspects of transfer optimization.\n\n.. raw:: html\n   :file: home/feature.html\n\n\n\nContents\n********************************************************************************\n\n.. toctree::\n   :maxdepth: 2\n\n   installation\n   quickstart\n   usage/algorithms\n   usage/problems\n   usage/results\n   usage/data_manage\n   usage/visualization\n   usage/cli\n   development/architecture\n   development/api_reference\n   faq\n\n\n\n\nContact\n********************************************************************************\n| **Peili Mao**  \n| *University of Electronic Science and Technology of China*  \n| *Department of Computer Science*  \n| **E-mail**:  \n| peili.z.mao@gmail.com\n\n\n\nCite\n********************************************************************************\n\nIf you have utilized our framework for research purposes, we kindly invite you to cite our publication as follows:\n\nBibTex:\n\n.. code-block:: bibtex\n\n    @ARTICLE{TransOPT,\n      title = {{TransOPT}: Transfer Optimization System for Bayesian Optimization Using Transfer Learning},\n      author = {Author Name and Collaborator Name},\n      url = {https://github.com/maopl/TransOPT},\n      year = {2024}\n    }\n\n\n\n"
  },
  {
    "path": "docs/source/installation.rst",
    "content": "Installation Guide\n==================\n\nThis section will guide you through the steps required to install TransOPT on your system.\n\nBefore installing, ensure you have the following installed:\n\n- Python 3.10\n- Node.js 17.9.1\n- npm 8.11.0\n\n1. Clone the repository:\n\n   .. code-block:: console \n\n     $ git clone https://github.com/maopl/TransOpt.git\n\n2. Install the required dependencies:\n\n   .. code-block:: console \n\n     $ cd TransOpt\n     $ python setup.py install\n\n\n3. Install the frontend dependencies:\n\n   .. code-block:: console \n\n     $ cd webui && npm install\n\n4. (Optional) Install additional extensions:\n\n   You can enhance the functionality of the system by installing the following optional packages:\n\n   - **Extension 1**: Provides advanced results analysis.\n\n     .. code-block:: console \n\n       $ python setup.py install[analysis]\n\n   - **Extension 2**: Adds support for distributed computing.\n\n     .. code-block:: console \n\n       $ python setup.py install[remote]\n\n\n5. (Optional) Install optional Docker containers:\n\n   The following Docker containers are available to provide additional problem generators:\n\n   - **Inverse RNA Design**: Provides inverse RNA design problem generators:\n\n     .. code-block:: console \n\n       $ bash scripts/init_docker.sh\n\n   - **Protein Design**: Adds support for distributed computing.\n\n     .. code-block:: console \n\n       $ bash scripts/init_csstuning.sh\n\n   - **Configurable Software Tuning**: Enables integration with external APIs.\n\n     .. code-block:: console \n\n       $ bash scripts/init_csstuning.sh"
  },
  {
    "path": "docs/source/quickstart.rst",
    "content": "Quick Start\n======================\n\nTransOPT is a sophisticated system designed to facilitate transfer optimization services and experiments. It is composed of two parts: the agent and the web user interface. The agent is responsible for running the optimization algorithms and the web user interface is responsible for displaying the results and managing the experiments.\n\nStart the backend agent:\n\n.. code-block:: console \n\n  $ python transopt/agent/app.py\n\n\n\nWeb User Interface Mode\n-----------------------\nWhen TransOPT has been started successfully, go to the directory of webui and start the webui on your local machine. Enable the user interface mode with the following command:\n\n.. code-block:: console \n\n  $ cd webui && npm start\n\n\n\nCommand Line Mode\n-----------------\n\nIn addition to the web UI mode, TransOPT also offers a Command Line (CMD) mode for users who may not have access to a display screen, such as when working on a remote server.\n\nTo run TransOPT in CMD mode, use the following command:\n\n.. code-block:: console \n\n  $ python transopt/agent/run_cli.py -n MyTask -v 3 -o 2 -m RF -acf UCB -b 300\n\nThis command sets up a task named `MyTask` with 3 variables and 2 objectives, using a Random Forest model (`RF`) and the Upper Confidence Bound (`UCB`) acquisition function, with a budget of 300 function evaluations.\n\nFor a complete list of available options and more detailed usage instructions, please refer to the :ref:`CLI documentation <command_line_usage>`.\n"
  },
  {
    "path": "docs/source/usage/TOS.bib",
    "content": "%!BibTeX\n\n@article{QureshiIGKWUHLYA23,\n    author       = {Rizwan Qureshi and\n                    Muhammad Irfan and\n                    Taimoor Muzaffar Gondal and\n                    Sheheryar Khan and\n                    Jia Wu and\n                    Muhammad Usman Hadi and\n                    John Heymach and\n                    Xiuning Le and\n                    Hong Yan and\n                    Tanvir Alam},\n    title        = {AI in drug discovery and its clinical relevance},\n    journal      = {Heliyon.},\n    volume       = {9},\n    number       = {7},\n    pages        = {e17575},\n    year         = {2023}\n}\n\n@article{jomaasg21,\n    author       = {hadi s. jomaa and\n                    lars schmidt{-}thieme and\n                    josif grabocka},\n    title        = {dataset2vec: learning dataset meta-features},\n    journal      = {data min. knowl. discov.},\n    volume       = {35},\n    number       = {3},\n    pages        = {964--985},\n    year         = {2021},\n    timestamp    = {tue, 07 may 2024 20:27:49 +0200},\n    biburl       = {https://dblp.org/rec/journals/datamine/jomaasg21.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{MittasA13,\n    author    = {Nikolaos Mittas and\n                 Lefteris Angelis},\n    title     = {Ranking and Clustering Software Cost Estimation Models through a Multiple Comparisons Algorithm},\n    journal   = {{IEEE} Trans. Software Eng.},\n    volume    = {39},\n    number    = {4},\n    pages     = {537--551},\n    year      = {2013},\n}\n\n@inproceedings{MirandaYWK22,\n    author       = {Brando Miranda and\n                    Patrick Yu and\n                    Yu-Xiong Wang and\n                    Sanmi Koyejo},\n    title        = {The Curse of Low Task Diversity: On the Failure of Transfer Learning to Outperform MAML and their Empirical Equivalence},\n    booktitle    = {NeurIPS 2022 Workshop on MetaLearn},\n    pages        = {770--778},\n    year         = {2022}\n}\n\n@inproceedings{ZhangH24,\n    author       = {Guanhua Zhang and\n                    Moritz Hardt},\n    title        = {Inherent Trade-Offs between Diversity and Stability in Multi-Task Benchmarks},\n    booktitle    = {ICML'24: Proc. of the 41st International Conference on Machine Learning},\n    year         = {2024},\n    note         = {accepted for publication}\n}\n\n@inproceedings{TripuraneniJJ20,\n    author       = {Nilesh Tripuraneni and\n                    Michael I. Jordan and\n                    Chi Jin},\n    title        = {On the Theory of Transfer Learning: The Importance of Task Diversity},\n    booktitle    = {NeurIPS'20: Proc. of the 33rd Annual Conference on Neural Information Processing Systems},\n    year         = {2020},\n    timestamp    = {Wed, 07 Dec 2022 22:58:55 +0100},\n    biburl       = {https://dblp.org/rec/conf/nips/TripuraneniJJ20.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@inproceedings{FeurerSH14,\n    author       = {Matthias Feurer and\n                    Jost Tobias Springenberg and\n                    Frank Hutter},\n    title        = {Using Meta-Learning to Initialize {Bayesian} Optimization of Hyperparameters},\n    booktitle    = {Proc. 
of the International Workshop on Meta-learning and Algorithm Selection colocated with 21st European Conference on Artificial Intelligence},\n    series       = {{CEUR} Workshop Proceedings},\n    volume       = {1201},\n    pages        = {3--10},\n    publisher    = {CEUR-WS.org},\n    year         = {2014},\n    timestamp    = {Fri, 10 Mar 2023 16:22:14 +0100},\n    biburl       = {https://dblp.org/rec/conf/ecai/FeurerSH14.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{HeZRS16,\n    author       = {Kaiming He and\n                    Xiangyu Zhang and\n                    Shaoqing Ren and\n                    Jian Sun},\n    title        = {Deep Residual Learning for Image Recognition},\n    booktitle    = {CVPR'16: Proc. of 2016 {IEEE} Conference on Computer Vision and Pattern Recognition},\n    pages        = {770--778},\n    publisher    = {{IEEE} Computer Society},\n    year         = {2016},\n    timestamp    = {Fri, 24 Mar 2023 00:02:57 +0100},\n    biburl       = {https://dblp.org/rec/conf/cvpr/HeZRS16.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{BrownMRSKDNSSAA20,\n    author       = {Tom B. Brown and\n                    Benjamin Mann and\n                    Nick Ryder and\n                    Melanie Subbiah and\n                    Jared Kaplan and\n                    Prafulla Dhariwal and\n                    Arvind Neelakantan and\n                    Pranav Shyam and\n                    Girish Sastry and\n                    Amanda Askell and\n                    Sandhini Agarwal and\n                    Ariel Herbert{-}Voss and\n                    Gretchen Krueger and\n                    Tom Henighan and\n                    Rewon Child and\n                    Aditya Ramesh and\n                    Daniel M. Ziegler and\n                    Jeffrey Wu and\n                    Clemens Winter and\n                    Christopher Hesse and\n                    Mark Chen and\n                    Eric Sigler and\n                    Mateusz Litwin and\n                    Scott Gray and\n                    Benjamin Chess and\n                    Jack Clark and\n                    Christopher Berner and\n                    Sam McCandlish and\n                    Alec Radford and\n                    Ilya Sutskever and\n                    Dario Amodei},\n    title        = {Language Models are Few-Shot Learners},\n    booktitle    = {NeurIPS'20: Proc. of the 33rd Annual Conference on Neural Information Processing Systems},\n    year         = {2020},\n    timestamp    = {Thu, 25 May 2023 10:38:31 +0200},\n    biburl       = {https://dblp.org/rec/conf/nips/BrownMRSKDNSSAA20.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{ShahriariSWAF16,\n    author       = {Bobak Shahriari and\n                    Kevin Swersky and\n                    Ziyu Wang and\n                    Ryan P. Adams and\n                    Nando de Freitas},\n    title        = {Taking the Human Out of the Loop: {A} Review of {Bayesian} Optimization},\n    journal      = {Proc. 
{IEEE}},\n    volume       = {104},\n    number       = {1},\n    pages        = {148--175},\n    year         = {2016},\n    timestamp    = {Fri, 02 Oct 2020 14:42:23 +0200},\n    biburl       = {https://dblp.org/rec/journals/pieee/ShahriariSWAF16.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{Frazier18,\n    author       = {Peter I. Frazier},\n    title        = {A Tutorial on {Bayesian} Optimization},\n    journal      = {CoRR},\n    volume       = {abs/1807.02811},\n    year         = {2018},\n    url          = {http://arxiv.org/abs/1807.02811},\n    eprinttype   = {arXiv},\n    eprint       = {1807.02811},\n    timestamp    = {Mon, 13 Aug 2018 16:48:03 +0200},\n    biburl       = {https://dblp.org/rec/journals/corr/abs-1807-02811.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{vazquez24,\n  title={De novo design of high-affinity binders of bioactive helical peptides},\n  author={V{\\'a}zquez Torres, Susana and Leung, Philip JY and Venkatesh, Preetham and Lutz, Isaac D and Hink, Fabian and Huynh, Huu-Hien and Becker, Jessica and Yeh, Andy Hsien-Wei and Juergens, David and Bennett, Nathaniel R and others},\n  journal={Nature},\n  volume={626},\n  number={7998},\n  pages={435--442},\n  year={2024},\n  publisher={Nature Publishing Group UK London}\n}\n\n\n@inproceedings{SnoekLA12,\n    author       = {Jasper Snoek and\n                    Hugo Larochelle and\n                    Ryan P. Adams},\n    title        = {Practical {Bayesian} Optimization of Machine Learning Algorithms},\n    booktitle    = {NIPS'12: Proc. of the 26th Annual Conference on Neural Information Processing Systems},\n    pages        = {2960--2968},\n    year         = {2012},\n    timestamp    = {Mon, 16 May 2022 15:41:51 +0200},\n    biburl       = {https://dblp.org/rec/conf/nips/SnoekLA12.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@inproceedings{LiXKZ20,\n  author       = {Shibo Li and\n                  Wei W. Xing and\n                  Robert M. Kirby and\n                  Shandian Zhe},\n  title        = {Multi-Fidelity Bayesian Optimization via Deep Neural Networks},\n  booktitle    = {NeurIPS'20: Proc. of the 33rd Annual Conference on Neural Information Processing Systems},\n  year         = {2020},\n  timestamp    = {Sun, 19 Mar 2023 20:50:17 +0100},\n  biburl       = {https://dblp.org/rec/conf/nips/LiXKZ20.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{hansen2010,\n  title={Comparing results of 31 algorithms from the black-box optimization benchmarking BBOB-2009},\n  author={Hansen, Nikolaus and Auger, Anne and Ros, Raymond and Finck, Steffen and Po{\\v{s}}{\\'\\i}k, Petr},\n  booktitle={Proceedings of the 12th annual conference companion on Genetic and evolutionary computation},\n  pages={1689--1696},\n  year={2010}\n}\n\n\n@inproceedings{SegreraPM08,\n  author       = {Saddys Segrera and\n                  Joel Pinho Lucas and\n                  Mar{\\'{\\i}}a N. Moreno Garc{\\'{\\i}}a},\n  title        = {Information-Theoretic Measures for Meta-learning},\n  booktitle    = {HAIS'08: Proc. 
of the 2008 Hybrid Artificial Intelligence Systems, Third International Workshop},\n  series       = {Lecture Notes in Computer Science},\n  volume       = {5271},\n  pages        = {458--465},\n  publisher    = {Springer},\n  year         = {2008},\n  timestamp    = {Tue, 14 May 2019 10:00:51 +0200},\n  biburl       = {https://dblp.org/rec/conf/hais/SegreraPM08.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{Ho95,\n  author       = {Tin Kam Ho},\n  title        = {Random decision forests},\n  booktitle    = {ICDAR'95: Proc. of the Third International Conference on Document Analysis and Recognition},\n  pages        = {278--282},\n  publisher    = {{IEEE} Computer Society},\n  year         = {1995},\n  timestamp    = {Fri, 24 Mar 2023 00:05:08 +0100},\n  biburl       = {https://dblp.org/rec/conf/icdar/Ho95.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@inproceedings{HuangLMW17,\n  author       = {Gao Huang and\n                  Zhuang Liu and\n                  Laurens van der Maaten and\n                  Kilian Q. Weinberger},\n  title        = {Densely Connected Convolutional Networks},\n  booktitle    = {CVPR'17: Proc. of the 2017 {IEEE} Conference on Computer Vision and Pattern Recognition},\n  pages        = {2261--2269},\n  publisher    = {{IEEE} Computer Society},\n  year         = {2017},\n  timestamp    = {Mon, 28 Aug 2023 21:17:39 +0200},\n  biburl       = {https://dblp.org/rec/conf/cvpr/HuangLMW17.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n\n@inproceedings{FeurerKESBH15,\n    author       = {Matthias Feurer and\n                    Aaron Klein and\n                    Katharina Eggensperger and\n                    Jost Tobias Springenberg and\n                    Manuel Blum and\n                    Frank Hutter},\n    title        = {Efficient and Robust Automated Machine Learning},\n    booktitle    = {NIPS'15: Proc. of the 28th Annual Conference on Neural Information Processing Systems},\n    pages        = {2962--2970},\n    year         = {2015},\n    timestamp    = {Mon, 16 May 2022 15:41:51 +0200},\n    biburl       = {https://dblp.org/rec/conf/nips/FeurerKESBH15.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{Cowen-RiversLTW22,\n    author       = {Alexander I. Cowen{-}Rivers and\n                    Wenlong Lyu and\n                    Rasul Tutunov and\n                    Zhi Wang and\n                    Antoine Grosnit and\n                    Ryan{-}Rhys Griffiths and\n                    Alexandre Max Maraval and\n                    Jianye Hao and\n                    Jun Wang and\n                    Jan Peters and\n                    Haitham Bou{-}Ammar},\n    title        = {{HEBO:} {An} Empirical Study of Assumptions in {Bayesian} Optimisation},\n    journal      = {J. Artif. Intell. 
Res.},\n    volume       = {74},\n    pages        = {1269--1349},\n    year         = {2022},\n    timestamp    = {Mon, 28 Aug 2023 21:18:41 +0200},\n    biburl       = {https://dblp.org/rec/journals/jair/Cowen-RiversLTW22.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n@article{ZhuangQDXZZXH21,\n  author       = {Fuzhen Zhuang and\n                  Zhiyuan Qi and\n                  Keyu Duan and\n                  Dongbo Xi and\n                  Yongchun Zhu and\n                  Hengshu Zhu and\n                  Hui Xiong and\n                  Qing He},\n  title        = {A Comprehensive Survey on Transfer Learning},\n  journal      = {Proc. {IEEE}},\n  volume       = {109},\n  number       = {1},\n  pages        = {43--76},\n  year         = {2021},\n  timestamp    = {Mon, 26 Jun 2023 20:52:19 +0200},\n  biburl       = {https://dblp.org/rec/journals/pieee/ZhuangQDXZZXH21.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{BaiLSZZC23,\n    author       = {Tianyi Bai and\n                    Yang Li and\n                    Yu Shen and\n                    Xinyi Zhang and\n                    Wentao Zhang and\n                    Bin Cui},\n    title        = {Transfer Learning for {Bayesian} Optimization: {A} Survey},\n    journal      = {CoRR},\n    volume       = {abs/2302.05927},\n    year         = {2023},\n    url          = {https://doi.org/10.48550/arXiv.2302.05927},\n    doi          = {10.48550/ARXIV.2302.05927},\n    eprinttype   = {arXiv},\n    eprint       = {2302.05927},\n    timestamp    = {Wed, 01 Mar 2023 21:16:31 +0100},\n    biburl       = {https://dblp.org/rec/journals/corr/abs-2302-05927.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{CortesV95,\n  author       = {Corinna Cortes and\n                  Vladimir Vapnik},\n  title        = {Support-Vector Networks},\n  journal      = {Mach. Learn.},\n  volume       = {20},\n  number       = {3},\n  pages        = {273--297},\n  year         = {1995},\n  timestamp    = {Mon, 02 Mar 2020 16:28:45 +0100},\n  biburl       = {https://dblp.org/rec/journals/ml/CortesV95.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{KrizhevskySH12,\n  author       = {Alex Krizhevsky and\n                  Ilya Sutskever and\n                  Geoffrey E. Hinton},\n  title        = {ImageNet Classification with Deep Convolutional Neural Networks},\n  booktitle    = {NIPS'12: Proc. of the 26th Annual\n                  Conference on Neural Information Processing Systems},\n  pages        = {1106--1114},\n  year         = {2012},\n  timestamp    = {Mon, 16 May 2022 15:41:51 +0200},\n  biburl       = {https://dblp.org/rec/conf/nips/KrizhevskySH12.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{ChenG16,\n  author       = {Tianqi Chen and\n                  Carlos Guestrin},\n  title        = {XGBoost: {A} Scalable Tree Boosting System},\n  booktitle    = {KDD'16: Proc. 
of the 22nd {ACM} {SIGKDD} International Conference on Knowledge Discovery and Data Mining},\n  pages        = {785--794},\n  publisher    = {{ACM}},\n  year         = {2016},\n  timestamp    = {Sat, 17 Dec 2022 01:15:30 +0100},\n  biburl       = {https://dblp.org/rec/conf/kdd/ChenG16.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@article{VanschorenRBT13,\n  author       = {Joaquin Vanschoren and\n                  Jan N. van Rijn and\n                  Bernd Bischl and\n                  Lu{\\'{\\i}}s Torgo},\n  title        = {OpenML: networked science in machine learning},\n  journal      = {{SIGKDD} Explor.},\n  volume       = {15},\n  number       = {2},\n  pages        = {49--60},\n  year         = {2013},\n  timestamp    = {Tue, 29 Sep 2020 10:56:50 +0200},\n  biburl       = {https://dblp.org/rec/journals/sigkdd/VanschorenRBT13.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{BombarelliWDHSSAHAA18,\n    author       = {Rafael G\\'omez-Bombarelli and\n                    Jennifer N. Wei and\n                    David Duvenaud and\n                    Jos\\'e Miguel Hern\\'andez-Lobato and\n                    Benjam\\'in S\\'anchez-Lengeling and\n                    Dennis Sheberla and\n                    Jorge Aguilera-Iparraguirre and\n                    Timothy D. Hirzel and\n                    Ryan P. Adams and\n                    Al\\'an Aspuru-Guzik},\n    title        = {Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules},\n    journal      = {ACS Cent. Sci.},\n    volume       = {4},\n    number       = {2},\n    pages        = {268--276},\n    year         = {2018}\n}\n\n@inproceedings{KorovinaXKNPSX20,\n    author       = {Ksenia Korovina and\n                    Sailun Xu and\n                    Kirthevasan Kandasamy and\n                    Willie Neiswanger and\n                    Barnab{\\'{a}}s P{\\'{o}}czos and\n                    Jeff Schneider and\n                    Eric P. Xing},\n    title        = {{ChemBO}: {Bayesian} Optimization of Small Organic Molecules with Synthesizable Recommendations},\n    booktitle    = {AISTATS'20: Proc. 
of the 23rd International Conference on Artificial Intelligence and Statistics},\n    series       = {Proceedings of Machine Learning Research},\n    volume       = {108},\n    pages        = {3393--3403},\n    publisher    = {{PMLR}},\n    year         = {2020},\n    timestamp    = {Tue, 17 Nov 2020 16:08:03 +0100},\n    biburl       = {https://dblp.org/rec/conf/aistats/KorovinaXKNPSX20.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{HaseRKA18,\n    author       = {Florian H\\\"ase and\n                    Lo\\\"ic M. Roch and\n                    Christoph Kreisbeck and\n                    Al\\'an Aspuru-Guzik},\n    title        = {Phoenics: {A} {Bayesian} Optimizer for Chemistry},\n    journal      = {ACS Cent. Sci.},\n    volume       = {4},\n    issue        = {9},\n    pages        = {1134--1145},\n    year         = {2018}\n}\n\n@article{WangHXLHLXT23,\n    author       = {Xiaoqian Wang and\n                    Yang Huang and\n                    Xiaoyu Xie and\n                    Yan Liu and\n                    Ziyu Huo and\n                    Maverick Lin and\n                    Hongliang Xin and\n                    Rong Tong},\n    title        = {Bayesian-optimization-assisted discovery of stereoselective aluminum complexes for ring-opening polymerization of racemic lactide},\n    journal      = {Nat. Commun.},\n    volume       = {14},\n    issue        = {3647},\n    year         = {2023},\n    pages        = {1--11}\n}\n\n@article{RaoG24,\n    author       = {Anish Rao and\n                    Marek Grzelczak},\n    title        = {Revisiting {El-Sayed} Synthesis: {Bayesian} Optimization for Revealing New Insights during the Growth of Gold Nanorods},\n    journal      = {Chem. Mater.},\n    volume       = {36},\n    issue        = {5},\n    year         = {2024},\n    pages        = {2577--2587}\n}\n\n@article{ShieldsSLPDAJAD21,\n    author       = {Benjamin J. Shields and\n                    Jason Stevens and\n                    Jun Li and\n                    Marvin Parasram and\n                    Farhan Damani and\n                    Jesus I. Martinez Alvarado and\n                    Jacob M. Janey and\n                    Ryan P. Adams and\n                    Abigail G. Doyle},\n    title        = {Bayesian reaction optimization as a tool for chemical synthesis},\n    journal      = {Nature},\n    volume       = {590},\n    year         = {2021},\n    pages        = {89--96}\n}\n\n@Inbook{Frazier2016,\n    author    = {Peter I. Frazier and\n                 Jialei Wang},\n    editor    = {Turab Lookman and\n                 Francis J. Alexander and\n                 Krishna Rajan},\n    title     = {Bayesian Optimization for Materials Design},\n    bookTitle = {Information Science for Materials Discovery and Design},\n    year      = {2016},\n    publisher = {Springer International Publishing},\n    address   = {Cham},\n    pages     = {45--75},\n    isbn      = {978-3-319-23871-5}\n}\n\n@book{Packwood17,\n    author    = {Daniel Packwood},\n    title     = {Bayesian Optimization for Materials Science},\n    publisher = {Springer Singapore},\n    year      = {2017},\n    month     = {October}\n}\n\n@article{ZhangAC20,\n    author       = {Yichi Zhang and\n                    Daniel W. Apley and\n                    Wei Chen},\n    title        = {Bayesian Optimization for Materials Design with Mixed Quantitative and Qualitative Variables},\n    journal      = {Sci. 
Rep.},\n    volume       = {10},\n    issue        = {1},\n    year         = {2020},\n    pages        = {2045--2322}\n}\n\n@article{Barnes11,\n    author       = {Chris P. Barnes and\n                    Daniel Silk and\n                    Xia Sheng and\n                    Michael P. H. Stumpf},\n    title        = {Bayesian design of synthetic biological systems},\n    journal      = {PNAS},\n    volume       = {108},\n    issue        = {37},\n    year         = {2011},\n    pages        = {15190--15195}\n}\n\n@article{AraujoVS22,\n    author       = {Robyn P. Araujo and\n                    Sean T. Vittadello and\n                    Michael P. H. Stumpf},\n    title        = {Bayesian and Algebraic Strategies to Design in Synthetic Biology},\n    journal      = {Proc. {IEEE}},\n    volume       = {110},\n    number       = {5},\n    pages        = {675--687},\n    year         = {2022},\n    timestamp    = {Mon, 13 Jun 2022 20:57:05 +0200},\n    biburl       = {https://dblp.org/rec/journals/pieee/AraujoVS22.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{MerzbacherAO23,\n    author       = {Charlotte Merzbacher and\n                    Oisin Mac Aodha and\n                    Diego A. Oyarz\\'un},\n    title        = {Bayesian Optimization for Design of Multiscale Biological Circuits},\n    journal      = {ACS Synth. Biol.},\n    volume       = {12},\n    issue        = {7},\n    year         = {2023},\n    pages        = {2073--2082}\n}\n\n@article{HuangQZTLC23,\n    author       = {Shiyue Huang and\n                    Yanzhao Qin and\n                    Xinyi Zhang and\n                    Yaofeng Tu and\n                    Zhongliang Li and\n                    Bin Cui},\n    title        = {Survey on performance optimization for database systems},\n    journal      = {Sci. China Inf. Sci.},\n    volume       = {66},\n    number       = {2},\n    year         = {2023},\n    timestamp    = {Thu, 02 Mar 2023 13:59:22 +0100},\n    biburl       = {https://dblp.org/rec/journals/chinaf/HuangQZTLC23.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{HellstenSLLHEKS23,\n    author       = {Erik Orm Hellsten and\n                    Artur L. F. Souza and\n                    Johannes Lenfers and\n                    Rubens Lacouture and\n                    Olivia Hsu and\n                    Adel Ejjeh and\n                    Fredrik Kjolstad and\n                    Michel Steuwer and\n                    Kunle Olukotun and\n                    Luigi Nardi},\n    title        = {{BaCO}: {A} Fast and Portable {Bayesian} Compiler Optimization Framework},\n    booktitle    = {ASPLOS'23: Proc. of the 28th {ACM} International Conference on Architectural Support for Programming Languages and Operating Systems},\n    pages        = {19--42},\n    publisher    = {{ACM}},\n    year         = {2023},\n    timestamp    = {Sat, 10 Feb 2024 18:04:52 +0100},\n    biburl       = {https://dblp.org/rec/conf/asplos/HellstenSLLHEKS23.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{BanchhorS21,\n    author       = {Chitrakant Banchhor and\n                    N. Srinivasu},\n    title        = {Analysis of Bayesian optimization algorithms for big data classification\n        based on Map Reduce framework},\n    journal      = {J. 
Big Data},\n    volume       = {8},\n    number       = {1},\n    pages        = {81},\n    year         = {2021},\n    timestamp    = {Fri, 11 Jun 2021 17:01:34 +0200},\n    biburl       = {https://dblp.org/rec/journals/jbd/BanchhorS21.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{ShiltonGRV17,\n    author       = {Alistair Shilton and\n                    Sunil Gupta and\n                    Santu Rana and\n                    Svetha Venkatesh},\n    title        = {Regret Bounds for Transfer Learning in {Bayesian} Optimisation},\n    booktitle    = {AISTATS'17: Proc. of the 2017 International Conference on Artificial Intelligence\n        and Statistics},\n    volume       = {54},\n    pages        = {307--315},\n    publisher    = {{PMLR}},\n    year         = {2017},\n    timestamp    = {Sun, 02 Oct 2022 15:54:22 +0200},\n    biburl       = {https://dblp.org/rec/conf/aistats/ShiltonGRV17.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{RamachandranGRV18,\n    author       = {Anil Ramachandran and\n                    Sunil Gupta and\n                    Santu Rana and\n                    Svetha Venkatesh},\n    title        = {Selecting Optimal Source for Transfer Learning in Bayesian Optimisation},\n    booktitle    = {PRICAI'18: Proc. of the 2018 Trends in Artificial Intelligence - 15th Pacific Rim International Conference on Artificial Intelligence},\n    series       = {Lecture Notes in Computer Science},\n    volume       = {11012},\n    pages        = {42--56},\n    publisher    = {Springer},\n    year         = {2018},\n    timestamp    = {Mon, 04 Nov 2019 12:36:13 +0100},\n    biburl       = {https://dblp.org/rec/conf/pricai/RamachandranGRV18.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@inproceedings{TighineanuSBRBV22,\n    author       = {Petru Tighineanu and\n                    Kathrin Skubch and\n                    Paul Baireuther and\n                    Attila Reiss and\n                    Felix Berkenkamp and\n                    Julia Vinogradska},\n    title        = {Transfer Learning with {Gaussian} Processes for {Bayesian} Optimization},\n    booktitle    = {AISTATS'22: Proc. of the 2022 International Conference on Artificial Intelligence and Statistics},\n    volume       = {151},\n    pages        = {6152--6181},\n    publisher    = {{PMLR}},\n    year         = {2022},\n    timestamp    = {Fri, 20 May 2022 16:11:25 +0200},\n    biburl       = {https://dblp.org/rec/conf/aistats/TighineanuSBRBV22.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{SwerskySA13,\n    author       = {Kevin Swersky and\n                    Jasper Snoek and\n                    Ryan Prescott Adams},\n    title        = {Multi-Task {Bayesian} Optimization},\n    booktitle    = {NIPS'13: Proc. of the 2013 Annual Conference on Neural Information Processing Systems},\n    pages        = {2004--2012},\n    year         = {2013},\n    timestamp    = {Mon, 16 May 2022 15:41:51 +0200},\n    biburl       = {https://dblp.org/rec/conf/nips/SwerskySA13.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{MossLR20,\n    author       = {Henry B. Moss and\n                    David S. Leslie and\n                    Paul Rayson},\n    title        = {{MUMBO:} MUlti-task Max-Value {Bayesian} Optimization},\n    booktitle    = {ECML/PKDD'20: Proc. 
of the 2020 European Conference on Machine Learning and Knowledge Discovery in Databases},\n    series       = {Lecture Notes in Computer Science},\n    volume       = {12459},\n    pages        = {447--462},\n    publisher    = {Springer},\n    year         = {2020},\n    timestamp    = {Tue, 21 Mar 2023 21:00:11 +0100},\n    biburl       = {https://dblp.org/rec/conf/pkdd/MossLR20.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{TaylorFWJGCJL23,\n    author    = {Connor J. Taylor and\n                 Kobi C. Felton and\n                 Daniel Wigh and\n                 Mohammed I. Jeraal and\n                 Rachel Grainger and\n                 Gianni Chessari and\n                 Christopher N. Johnson and\n                 Alexei A. Lapkin},\n    title     = {Accelerated Chemical Reaction Optimization Using Multi-Task Learning},\n    journal   = {ACS Cent. Sci.},\n    volume    = {9},\n    issue     = {5},\n    year      = {2023}\n}\n\n@inproceedings{VolppFFDFHD20,\n    author       = {Michael Volpp and\n                    Lukas P. Fr{\\\"{o}}hlich and\n                    Kirsten Fischer and\n                    Andreas Doerr and\n                    Stefan Falkner and\n                    Frank Hutter and\n                    Christian Daniel},\n    title        = {Meta-Learning Acquisition Functions for Transfer Learning in Bayesian\n        Optimization},\n    booktitle    = {ICLR'20: Proc. of the 8th International Conference on Learning Representations},\n    publisher    = {OpenReview.net},\n    year         = {2020},\n    timestamp    = {Thu, 14 Oct 2021 10:00:36 +0200},\n    biburl       = {https://dblp.org/rec/conf/iclr/VolppFFDFHD20.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{ZimmerLH21,\n    author       = {Lucas Zimmer and\n                    Marius Lindauer and\n                    Frank Hutter},\n    title        = {{Auto-PyTorch}: Multi-Fidelity MetaLearning for Efficient and Robust {AutoDL}},\n    journal      = {{IEEE} Trans. Pattern Anal. Mach. Intell.},\n    volume       = {43},\n    number       = {9},\n    pages        = {3079--3090},\n    year         = {2021},\n    timestamp    = {Wed, 16 Mar 2022 23:54:55 +0100},\n    biburl       = {https://dblp.org/rec/journals/pami/ZimmerLH21.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{FeurerEFLH22,\n    author       = {Matthias Feurer and\n                    Katharina Eggensperger and\n                    Stefan Falkner and\n                    Marius Lindauer and\n                    Frank Hutter},\n    title        = {Auto-Sklearn 2.0: Hands-free {AutoML} via Meta-Learning},\n    journal      = {J. Mach. Learn. Res.},\n    volume       = {23},\n    pages        = {261:1--261:61},\n    year         = {2022},\n    timestamp    = {Sun, 12 Nov 2023 02:20:14 +0100},\n    biburl       = {https://dblp.org/rec/journals/jmlr/FeurerEFLH22.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{FeurerSH15,\n  author       = {Matthias Feurer and\n                  Jost Tobias Springenberg and\n                  Frank Hutter},\n  title        = {Initializing Bayesian Hyperparameter Optimization via Meta-Learning},\n  booktitle    = {AAAI'15: Proc. 
of the 2015 {AAAI} Conference on Artificial Intelligence},\n  pages        = {1128--1135},\n  publisher    = {{AAAI} Press},\n  year         = {2015},\n  timestamp    = {Mon, 18 Sep 2023 11:22:44 +0200},\n  biburl       = {https://dblp.org/rec/conf/aaai/FeurerSH15.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{WistubaSS15a,\n  author       = {Martin Wistuba and\n                  Nicolas Schilling and\n                  Lars Schmidt{-}Thieme},\n  title        = {Learning hyperparameter optimization initializations},\n  booktitle    = {DSAA'15: Proc. of the 2015 {IEEE} International Conference on Data Science and Advanced\n                  Analytics},\n  pages        = {1--10},\n  publisher    = {{IEEE}},\n  year         = {2015},\n  timestamp    = {Wed, 16 Oct 2019 14:14:55 +0200},\n  biburl       = {https://dblp.org/rec/conf/dsaa/WistubaSS15.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{MaravelZGA23,\n    author       = {Alexandre Maraval and\n                    Matthieu Zimmer and\n                    Antoine Grosnit and\n                    Haitham Bou Ammar},\n    booktitle    = {NeurIPS'23: Proc. of the 36th Annual Conference on Neural Information Processing Systems},\n    title        = {End-to-End Meta-{Bayesian} Optimisation with Transformer Neural Processes},\n    volume       = {36},\n    pages        = {11246--11260},\n    year         = {2023}\n}\n\n@inproceedings{HsiehHL21,\n    author       = {Bing{-}Jing Hsieh and\n                    Ping{-}Chun Hsieh and\n                    Xi Liu},\n    title        = {Reinforced Few-Shot Acquisition Function Learning for {Bayesian} Optimization},\n    booktitle    = {NeurIPS'21: Proc. of the 34th Annual Conference on Neural Information Processing Systems},\n    pages        = {7718--7731},\n    year         = {2021},\n    timestamp    = {Tue, 03 May 2022 16:20:47 +0200},\n    biburl       = {https://dblp.org/rec/conf/nips/HsiehHL21.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{WistubaG21,\n    author       = {Martin Wistuba and\n                    Josif Grabocka},\n    title        = {Few-Shot Bayesian Optimization with Deep Kernel Surrogates},\n    booktitle    = {ICLR'21: Proc. of the 9th International Conference on Learning Representations},\n    publisher    = {OpenReview.net},\n    year         = {2021},\n    timestamp    = {Wed, 23 Jun 2021 17:36:40 +0200},\n    biburl       = {https://dblp.org/rec/conf/iclr/WistubaG21.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{MallikBHSJLNH23,\n    author       = {Neeratyoy Mallik and\n                    Edward Bergman and\n                    Carl Hvarfner and\n                    Danny Stoll and\n                    Maciej Janowski and\n                    Marius Lindauer and\n                    Luigi Nardi and\n                    Frank Hutter},\n    title        = {{PriorBand}: Practical Hyperparameter Optimization in the Age of Deep Learning},\n    booktitle    = {NeurIPS'23: Proc. of the 36th Annual Conference on Neural Information Processing Systems},\n    pages        = {7377--7391},\n    volume       = {36},\n    year         = {2023}\n}\n\n@inproceedings{BalandatKJDLWB20,\n  author       = {Maximilian Balandat and\n                  Brian Karrer and\n                  Daniel R. 
Jiang and\n                  Samuel Daulton and\n                  Benjamin Letham and\n                  Andrew Gordon Wilson and\n                  Eytan Bakshy},\n  title        = {{BoTorch}: {A} Framework for Efficient Monte-Carlo {Bayesian} Optimization},\n  booktitle    = {NeurIPS'20: Proc. of the 33rd Annual Conference on Neural Information Processing Systems},\n  year         = {2020},\n  timestamp    = {Tue, 19 Jan 2021 15:57:16 +0100},\n  biburl       = {https://dblp.org/rec/conf/nips/BalandatKJDLWB20.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@inproceedings{BergstraYC13,\n    author       = {James Bergstra and\n                    Dan Yamins and\n                    David D. Cox},\n    title        = {{HyperOpt}: {A} {Python} Library for Optimizing the Hyperparameters of Machine Learning Algorithms},\n    booktitle    = {SciPy'13: Proc. of the 2013 Python in Science Conference},\n    pages        = {13--19},\n    publisher    = {scipy.org},\n    year         = {2013},\n    timestamp    = {Wed, 03 May 2023 17:10:38 +0200},\n    biburl       = {https://dblp.org/rec/conf/scipy/BergstraYC13.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{AkibaSYOK19,\n    author       = {Takuya Akiba and\n                    Shotaro Sano and\n                    Toshihiko Yanase and\n                    Takeru Ohta and\n                    Masanori Koyama},\n    title        = {Optuna: {A} Next-generation Hyperparameter Optimization Framework},\n    booktitle    = {KDD'19: Proc. of the 2019 {ACM} {SIGKDD} International Conference on Knowledge Discovery {\\&} Data Mining},\n    pages        = {2623--2631},\n    publisher    = {{ACM}},\n    year         = {2019},\n    timestamp    = {Tue, 16 Aug 2022 23:04:27 +0200},\n    biburl       = {https://dblp.org/rec/conf/kdd/AkibaSYOK19.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{NardiSKO19,\n    author       = {Luigi Nardi and\n                    Artur L. F. Souza and\n                    David Koeplinger and\n                    Kunle Olukotun},\n    title        = {{HyperMapper}: a Practical Design Space Exploration Framework},\n    booktitle    = {MASCOTS'19: Proc. of the 2019 {IEEE} International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems},\n    pages        = {425--426},\n    publisher    = {{IEEE} Computer Society},\n    year         = {2019},\n    timestamp    = {Thu, 09 Jul 2020 14:09:10 +0200},\n    biburl       = {https://dblp.org/rec/conf/mascots/NardiSKO19.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{RijnBTGUFWWBV13,\n    author       = {Jan N. van Rijn and\n                    Bernd Bischl and\n                    Lu{\\'{\\i}}s Torgo and\n                    Bo Gao and\n                    Venkatesh Umaashankar and\n                    Simon Fischer and\n                    Patrick Winter and\n                    Bernd Wiswedel and\n                    Michael R. 
Berthold and\n                    Joaquin Vanschoren},\n    title        = {{OpenML}: {A} Collaborative Science Platform},\n    booktitle    = {PKDD'13: Proc. of the 2013 Machine Learning and Knowledge Discovery in Databases - European Conference},\n    series       = {Lecture Notes in Computer Science},\n    volume       = {8190},\n    pages        = {645--649},\n    publisher    = {Springer},\n    year         = {2013},\n    timestamp    = {Tue, 21 Mar 2023 21:00:11 +0100},\n    biburl       = {https://dblp.org/rec/conf/pkdd/RijnBTGUFWWBV13.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{FalknerKH18,\n    author       = {Stefan Falkner and\n                    Aaron Klein and\n                    Frank Hutter},\n    title        = {{BOHB:} Robust and Efficient Hyperparameter Optimization at Scale},\n    booktitle    = {ICML'18: Proc. of the 35th International Conference on Machine Learning},\n    series       = {Proceedings of Machine Learning Research},\n    volume       = {80},\n    pages        = {1436--1445},\n    publisher    = {{PMLR}},\n    year         = {2018},\n    timestamp    = {Wed, 03 Apr 2019 18:17:30 +0200},\n    biburl       = {https://dblp.org/rec/conf/icml/FalknerKH18.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n@inproceedings{Krizhevsky09,\n  title={Learning Multiple Layers of Features from Tiny Images},\n  author={Alex Krizhevsky},\n  year={2009},\n}\n\n@article{Deng12,\n  title={The MNIST Database of Handwritten Digit Images for Machine Learning Research [Best of the Web]},\n  author={Li Deng},\n  journal={IEEE Signal Processing Magazine},\n  year={2012},\n  volume={29},\n  pages={141-142},\n}\n\n@article{Runge24,\n  title={RnaBench: A Comprehensive Library for In Silico RNA Modelling},\n  author={Frederic Runge and Karim Farid and Jorg K. H. Franke and Frank Hutter},\n  journal={bioRxiv},\n  year={2024},\n}\n\n@article{Kalvari20,\n  title={Rfam 14: expanded coverage of metagenomic, viral and microRNA families},\n  author={Ioanna Kalvari and Eric P. Nawrocki and Nancy Ontiveros-Palacios and Joanna Argasinska and Kevin Lamkiewicz and Manja Marz and Sam Griffiths-Jones and Claire Toffano-Nioche and Daniel Gautheret and Zasha Weinberg and Elena Rivas and Sean R. Eddy and Robert D. Finn and Alex Bateman and Anton I. Petrov},\n  journal={Nucleic Acids Research},\n  year={2020},\n  volume={49},\n  pages={D192 - D200},\n}\n\n@inproceedings{Suganthan05,\n  title={Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization},\n  author={Ponnuthurai Nagaratnam Suganthan and Nikolaus Hansen and Jing J. Liang and Kalyanmoy Deb and Ying-Ping Chen and Anne Auger and Santosh Tiwari},\n  year={2005},\n}\n\n@inproceedings{Netzer11,\n  title={Reading Digits in Natural Images with Unsupervised Feature Learning},\n  author={Yuval Netzer and Tao Wang and Adam Coates and A. Bissacco and Bo Wu and A. 
Ng},\n  year={2011},\n}\n\n@article{kushner1964,\n  title={A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise},\n  author={Kushner, Harold J},\n  year={1964}\n}\n\n@article{HansenARMTB21,\n    author       = {Nikolaus Hansen and\n                    Anne Auger and\n                    Raymond Ros and\n                    Olaf Mersmann and\n                    Tea Tusar and\n                    Dimo Brockhoff},\n    title        = {{COCO:} a platform for comparing continuous optimizers in a black-box\n        setting},\n    journal      = {Optim. Methods Softw.},\n    volume       = {36},\n    number       = {1},\n    pages        = {114--144},\n    year         = {2021},\n    timestamp    = {Mon, 03 Jan 2022 21:54:56 +0100},\n    biburl       = {https://dblp.org/rec/journals/oms/HansenARMTB21.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{LindauerEFBDBRS22,\n  author       = {Marius Lindauer and\n                  Katharina Eggensperger and\n                  Matthias Feurer and\n                  Andr{\\'{e}} Biedenkapp and\n                  Difan Deng and\n                  Carolin Benjamins and\n                  Tim Ruhkopf and\n                  Ren{\\'{e}} Sass and\n                  Frank Hutter},\n  title        = {{SMAC3:} {A} Versatile Bayesian Optimization Package for Hyperparameter\n                  Optimization},\n  journal      = {J. Mach. Learn. Res.},\n  volume       = {23},\n  pages        = {54:1--54:9},\n  year         = {2022},\n  timestamp    = {Wed, 07 Dec 2022 23:05:46 +0100},\n  biburl       = {https://dblp.org/rec/journals/jmlr/LindauerEFBDBRS22.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{LiSZCJLJG0Y0021,\n    author       = {Yang Li and\n                    Yu Shen and\n                    Wentao Zhang and\n                    Yuanwei Chen and\n                    Huaijun Jiang and\n                    Mingchao Liu and\n                    Jiawei Jiang and\n                    Jinyang Gao and\n                    Wentao Wu and\n                    Zhi Yang and\n                    Ce Zhang and\n                    Bin Cui},\n    title        = {OpenBox: {A} Generalized Black-box Optimization Service},\n    booktitle    = {KDD'21: Proc. of the 27th {ACM} {SIGKDD} Conference on Knowledge Discovery and Data Mining},\n    pages        = {3209--3219},\n    publisher    = {{ACM}},\n    year         = {2021},\n    timestamp    = {Sat, 09 Apr 2022 12:35:08 +0200},\n    biburl       = {https://dblp.org/rec/conf/kdd/LiSZCJLJG0Y0021.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{GolovinSMKKS17,\n    author       = {Daniel Golovin and\n                    Benjamin Solnik and\n                    Subhodeep Moitra and\n                    Greg Kochanski and\n                    John Karro and\n                    D. Sculley},\n    title        = {Google Vizier: {A} Service for Black-Box Optimization},\n    booktitle    = {KDD'17: Proc. 
of the 23rd {ACM} {SIGKDD} International Conference on Knowledge Discovery and Data Mining},\n    pages        = {1487--1495},\n    publisher    = {{ACM}},\n    year         = {2017},\n    timestamp    = {Fri, 25 Dec 2020 01:14:16 +0100},\n    biburl       = {https://dblp.org/rec/conf/kdd/GolovinSMKKS17.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@misc{OpenAI2022ChatGPT,\n    author       = {OpenAI},\n    title        = {Introducing ChatGPT},\n    year         = {2022},\n    howpublished = {\\url{https://openai.com/blog/chatgpt/}}\n}\n\n@article{KudithipudiAB22,\n    author    = {Dhireesha Kudithipudi and\n                 Mario Aguilar-Simon and\n                 Jonathan Babb and\n                 et al.},\n    title     = {Biological underpinnings for lifelong learning machines},\n    journal   = {Nat. Mach. Intell.},\n    year      = {2022},\n    volume    = {4},\n    pages     = {196--210}\n}\n\n@article{LiCY23,\n    author    = {Ke Li and\n                 Renzhi Chen and\n                 Xin Yao},\n    title     = {A Data-Driven Evolutionary Transfer Optimization for Expensive Problems in Dynamic Environments},\n    journal   = {IEEE Trans. Evol. Comput.},\n    year      = {2023},\n    note      = {in press}\n}\n\n@book{Hazan22,\n    author    = {Elad Hazan},\n    title     = {Introduction to Online Convex Optimization},\n    publisher = {The MIT Press},\n    edition   = {Second},\n    series    = {Adaptive Computation and Machine Learning series},\n    year      = {2022},\n    month     = {September},\n    isbn      = {9780262046985}\n}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% KL references %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n@article{RiversLLT2022,\n    author       = {Cowen-Rivers, Alexander and Lyu, Wenlong and Tutunov, Rasul and Wang, Zhi and Grosnit, Antoine and Griffiths, Ryan-Rhys and Maraval, Alexandre and Hao, Jianye and Wang, Jun and Peters, Jan and Bou Ammar, Haitham},\n    title        = {{HEBO}: Pushing The Limits of Sample-Efficient Hyperparameter Optimisation},\n    journal      = {J. Artif. Intell. Res.},\n    volume       = {74},\n    pages        = {1269--1349},\n    year         = {2022}\n}\n\n@article{LinYLWW22,\n  author       = {Kevin Lin and\n                  Zhengyuan Yang and\n                  Linjie Li and\n                  Jianfeng Wang and\n                  Lijuan Wang},\n  title        = {DEsignBench: Exploring and Benchmarking {DALL-E} 3 for Imagining Visual\n                  Design},\n  journal      = {CoRR},\n  volume       = {abs/2310.15144},\n  year         = {2023},\n  eprinttype    = {arXiv},\n  eprint       = {2310.15144},\n  timestamp    = {Mon, 30 Oct 2023 11:06:08 +0100},\n  biburl       = {https://dblp.org/rec/journals/corr/abs-2310-15144.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{EsterKSX96,\n  author       = {Martin Ester and\n                  Hans{-}Peter Kriegel and\n                  J{\\\"{o}}rg Sander and\n                  Xiaowei Xu},\n  editor       = {Evangelos Simoudis and\n                  Jiawei Han and\n                  Usama M. 
Fayyad},\n  title        = {A Density-Based Algorithm for Discovering Clusters in Large Spatial\n                  Databases with Noise},\n  booktitle    = {KDD'96: Proc. of the 2nd International Conference on Knowledge Discovery and Data Mining},\n  pages        = {226--231},\n  publisher    = {{AAAI} Press},\n  year         = {1996},\n  timestamp    = {Sun, 05 Aug 2018 22:58:23 +0200},\n  biburl       = {https://dblp.org/rec/conf/kdd/EsterKSX96.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{MossLGR21,\n  author       = {Henry B. Moss and\n                  David S. Leslie and\n                  Javier Gonzalez and\n                  Paul Rayson},\n  title        = {{GIBBON:} General-purpose Information-Based {Bayesian} Optimisation},\n  journal      = {J. Mach. Learn. Res.},\n  volume       = {22},\n  pages        = {235:1--235:49},\n  year         = {2021},\n  timestamp    = {Mon, 31 Jan 2022 17:23:36 +0100},\n  biburl       = {https://dblp.org/rec/journals/jmlr/MossLGR21.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{WistubaSS18,\n    author       = {Martin Wistuba and\n                    Nicolas Schilling and\n                    Lars Schmidt{-}Thieme},\n    title        = {Scalable {Gaussian} process-based transfer surrogates for hyperparameter optimization},\n    journal      = {Mach. Learn.},\n    volume       = {107},\n    number       = {1},\n    pages        = {43--78},\n    year         = {2018},\n    timestamp    = {Mon, 02 Mar 2020 16:28:54 +0100},\n    biburl       = {https://dblp.org/rec/journals/ml/WistubaSS18.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{Wang2021,\n    title      = {Pre-trained {Gaussian} processes for {Bayesian} optimization},\n    author     = {Wang, Zi and Dahl, George E and Swersky, Kevin and Lee, Chansoo and Mariet, Zelda and Nado, Zachary and Gilmer, Justin and Snoek, Jasper and Ghahramani, Zoubin},\n    journal    = {arXiv preprint arXiv:2109.08215},\n    year       = {2021}\n}\n\n@article{LiSJZLLZC22,\n  author       = {Yang Li and\n                  Yu Shen and\n                  Huaijun Jiang and\n                  Wentao Zhang and\n                  Jixiang Li and\n                  Ji Liu and\n                  Ce Zhang and\n                  Bin Cui},\n  title        = {Hyper-Tune: Towards Efficient Hyper-parameter Tuning at Scale},\n  journal      = {Proc. {VLDB} Endow.},\n  volume       = {15},\n  number       = {6},\n  pages        = {1256--1265},\n  year         = {2022},\n  timestamp    = {Sat, 28 Oct 2023 13:59:30 +0200},\n  biburl       = {https://dblp.org/rec/journals/pvldb/LiSJZLLZC22.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{isaacs06,\n  title={RNA synthetic biology},\n  author={Isaacs, Farren J and Dwyer, Daniel J and Collins, James J},\n  journal={Nature biotechnology},\n  volume={24},\n  number={5},\n  pages={545--554},\n  year={2006},\n  publisher={Nature Publishing Group US New York}\n}\n\n\n\n@inproceedings{pineda2021hpob,\n  author    = {Sebastian Pineda{-}Arango and\n               Hadi S. 
Jomaa and\n               Martin Wistuba and\n               Josif Grabocka},\n  title     = {{HPO-B:} {A} Large-Scale Reproducible Benchmark for Black-Box {HPO} based on OpenML},\n  booktitle    = {NIPS'21: Proc. of the 2021 Neural Information Processing Systems Track on Datasets and Benchmarks},\n  year      = {2021}\n}\n\n\n\n@article{ZhangCLWTLC22,\n  author       = {Xinyi Zhang and\n                  Zhuo Chang and\n                  Yang Li and\n                  Hong Wu and\n                  Jian Tan and\n                  Feifei Li and\n                  Bin Cui},\n  title        = {Facilitating Database Tuning with Hyper-Parameter Optimization: {A}\n                  Comprehensive Experimental Evaluation},\n  journal      = {Proc. {VLDB} Endow.},\n  volume       = {15},\n  number       = {9},\n  pages        = {1808--1821},\n  year         = {2022},\n  timestamp    = {Mon, 23 Oct 2023 15:31:40 +0200},\n  biburl       = {https://dblp.org/rec/journals/pvldb/ZhangCLWTLC22.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@article{KuhnPTB18,\n  author       = {Daniel K{\\\"{u}}hn and\n                  Philipp Probst and\n                  Janek Thomas and\n                  Bernd Bischl},\n  title        = {Automatic Exploration of Machine Learning Experiments on OpenML},\n  journal      = {CoRR},\n  volume       = {abs/1806.10961},\n  year         = {2018},\n  eprinttype    = {arXiv},\n  eprint       = {1806.10961},\n  timestamp    = {Mon, 13 Aug 2018 16:47:31 +0200},\n  biburl       = {https://dblp.org/rec/journals/corr/abs-1806-10961.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{JiangSLZZ,\n  author       = {Huaijun Jiang and\n                  Yu Shen and\n                  Yang Li and\n                  Wentao Zhang and\n                  Ce Zhang and\n                  Bin Cui},\n  title        = {OpenBox: {A} Python Toolkit for Generalized Black-box Optimization},\n  journal      = {CoRR},\n  volume       = {abs/2304.13339},\n  year         = {2023},\n  eprinttype    = {arXiv},\n  eprint       = {2304.13339},\n  timestamp    = {Thu, 12 Oct 2023 13:32:35 +0200},\n  biburl       = {https://dblp.org/rec/journals/corr/abs-2304-13339.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@article{HEBO22,\n  author       = {Alexander I. Cowen{-}Rivers and\n                  Wenlong Lyu and\n                  Zhi Wang and\n                  Rasul Tutunov and\n                  Jianye Hao and\n                  Jun Wang and\n                  Haitham Bou{-}Ammar},\n  title        = {{HEBO:} Heteroscedastic Evolutionary {Bayesian} Optimisation},\n  journal      = {CoRR},\n  volume       = {abs/2012.03826},\n  year         = {2020},\n  eprinttype    = {arXiv},\n  eprint       = {2012.03826},\n  timestamp    = {Thu, 10 Nov 2022 17:04:22 +0100},\n  biburl       = {https://dblp.org/rec/journals/corr/abs-2012-03826.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{Pyzer-Knapp18,\n  author       = {Edward O. Pyzer{-}Knapp},\n  title        = {{Bayesian} optimization for accelerated drug discovery},\n  journal      = {{IBM} J. Res. 
Dev.},\n  volume       = {62},\n  number       = {6},\n  pages        = {2:1--2:7},\n  year         = {2018},\n  timestamp    = {Fri, 13 Mar 2020 10:54:17 +0100},\n  biburl       = {https://dblp.org/rec/journals/ibmrd/Pyzer-Knapp18.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{MinGO21,\n  author       = {Alan Tan Wei Min and\n                  Abhishek Gupta and\n                  Yew{-}Soon Ong},\n  title        = {Generalizing Transfer {Bayesian} Optimization to Source-Target Heterogeneity},\n  journal      = {{IEEE} Trans. Autom. Sci. Eng.},\n  volume       = {18},\n  number       = {4},\n  pages        = {1754--1765},\n  year         = {2021},\n  timestamp    = {Wed, 03 Nov 2021 08:27:14 +0100},\n  biburl       = {https://dblp.org/rec/journals/tase/MinGO21.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{JiangHQHY18,\n  author       = {Min Jiang and\n                  Zhongqiang Huang and\n                  Liming Qiu and\n                  Wenzhen Huang and\n                  Gary G. Yen},\n  title        = {Transfer Learning-Based Dynamic Multiobjective Optimization Algorithms},\n  journal      = {{IEEE} Trans. Evol. Comput.},\n  volume       = {22},\n  number       = {4},\n  pages        = {501--514},\n  year         = {2018},\n  timestamp    = {Tue, 12 May 2020 16:50:56 +0200},\n  biburl       = {https://dblp.org/rec/journals/tec/JiangHQHY18.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{JiangWQGGT21,\n  author       = {Min Jiang and\n                  Zhenzhong Wang and\n                  Liming Qiu and\n                  Shihui Guo and\n                  Xing Gao and\n                  Kay Chen Tan},\n  title        = {A Fast Dynamic Evolutionary Multiobjective Algorithm via Manifold\n                  Transfer Learning},\n  journal      = {{IEEE} Trans. Cybern.},\n  volume       = {51},\n  number       = {7},\n  pages        = {3417--3428},\n  year         = {2021},\n  timestamp    = {Sat, 31 Jul 2021 17:21:45 +0200},\n  biburl       = {https://dblp.org/rec/journals/tcyb/JiangWQGGT21.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@article{QiaoYQ0SYLT23,\n  author       = {Kangjia Qiao and\n                  Kunjie Yu and\n                  Boyang Qu and\n                  Jing Liang and\n                  Hui Song and\n                  Caitong Yue and\n                  Hongyu Lin and\n                  Kay Chen Tan},\n  title        = {Dynamic Auxiliary Task-Based Evolutionary Multitasking for Constrained\n                  Multiobjective Optimization},\n  journal      = {{IEEE} Trans. Evol. Comput.},\n  volume       = {27},\n  number       = {3},\n  pages        = {642--656},\n  year         = {2023},\n  timestamp    = {Thu, 15 Jun 2023 21:57:41 +0200},\n  biburl       = {https://dblp.org/rec/journals/tec/QiaoYQ0SYLT23.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{LiuW19,\n  author       = {Zhi{-}Zhong Liu and\n                  Yong Wang},\n  title        = {Handling Constrained Multiobjective Optimization Problems With Constraints\n                  in Both the Decision and Objective Spaces},\n  journal      = {{IEEE} Trans. Evol. 
Comput.},\n  volume       = {23},\n  number       = {5},\n  pages        = {870--884},\n  year         = {2019},\n  timestamp    = {Tue, 12 May 2020 16:50:56 +0200},\n  biburl       = {https://dblp.org/rec/journals/tec/LiuW19.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{ZhaoYSM22,\n  author       = {Qi Zhao and\n                  Bai Yan and\n                  Yuhui Shi and\n                  Martin Middendorf},\n  title        = {Evolutionary Dynamic Multiobjective Optimization via Learning From\n                  Historical Search Process},\n  journal      = {{IEEE} Trans. Cybern.},\n  volume       = {52},\n  number       = {7},\n  pages        = {6119--6130},\n  year         = {2022},\n  timestamp    = {Mon, 25 Jul 2022 08:40:11 +0200},\n  biburl       = {https://dblp.org/rec/journals/tcyb/ZhaoYSM22.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{GuptaOF16,\n  author       = {Abhishek Gupta and\n                  Yew{-}Soon Ong and\n                  Liang Feng},\n  title        = {Multifactorial Evolution: Toward Evolutionary Multitasking},\n  journal      = {{IEEE} Trans. Evol. Comput.},\n  volume       = {20},\n  number       = {3},\n  pages        = {343--357},\n  year         = {2016},\n  timestamp    = {Sun, 25 Jul 2021 11:39:56 +0200},\n  biburl       = {https://dblp.org/rec/journals/tec/GuptaOF16.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{GuptaOFT17,\n  author       = {Abhishek Gupta and\n                  Yew{-}Soon Ong and\n                  Liang Feng and\n                  Kay Chen Tan},\n  title        = {Multiobjective Multifactorial Optimization in Evolutionary Multitasking},\n  journal      = {{IEEE} Trans. Cybern.},\n  volume       = {47},\n  number       = {7},\n  pages        = {1652--1665},\n  year         = {2017},\n  timestamp    = {Sun, 25 Jul 2021 11:39:09 +0200},\n  biburl       = {https://dblp.org/rec/journals/tcyb/GuptaOFT17.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@article{BaliGOT21,\n  author       = {Kavitesh Kumar Bali and\n                  Abhishek Gupta and\n                  Yew{-}Soon Ong and\n                  Puay Siew Tan},\n  title        = {Cognizant Multitasking in Multiobjective Multifactorial Evolution:\n                  {MO-MFEA-II}},\n  journal      = {{IEEE} Trans. Cybern.},\n  volume       = {51},\n  number       = {4},\n  pages        = {1784--1796},\n  year         = {2021},\n  timestamp    = {Tue, 01 Jun 2021 09:59:48 +0200},\n  biburl       = {https://dblp.org/rec/journals/tcyb/BaliGOT21.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n@article{DingYJC19,\n  author       = {Jinliang Ding and\n                  Cuie Yang and\n                  Yaochu Jin and\n                  Tianyou Chai},\n  title        = {Generalized Multitasking for Evolutionary Optimization of Expensive\n                  Problems},\n  journal      = {{IEEE} Trans. Evol. 
Comput.},\n  volume       = {23},\n  number       = {1},\n  pages        = {44--58},\n  year         = {2019},\n  timestamp    = {Tue, 12 May 2020 16:51:00 +0200},\n  biburl       = {https://dblp.org/rec/journals/tec/DingYJC19.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{JoyRGV19,\n  author       = {Tinu Theckel Joy and\n                  Santu Rana and\n                  Sunil Gupta and\n                  Svetha Venkatesh},\n  title        = {A flexible transfer learning framework for {Bayesian} optimization with convergence guarantee},\n  journal      = {Expert Syst. Appl.},\n  volume       = {115},\n  pages        = {656--672},\n  year         = {2019},\n  timestamp    = {Sat, 19 Oct 2019 19:03:17 +0200},\n  biburl       = {https://dblp.org/rec/journals/eswa/JoyRGV19.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{SnoekRSKSSPPA15,\n  author       = {Jasper Snoek and\n                  Oren Rippel and\n                  Kevin Swersky and\n                  Ryan Kiros and\n                  Nadathur Satish and\n                  Narayanan Sundaram and\n                  Md. Mostofa Ali Patwary and\n                  Prabhat and\n                  Ryan P. Adams},\n  title        = {Scalable {Bayesian} Optimization Using Deep Neural Networks},\n  booktitle    = {ICML'15: Proc. of the 2015 International Conference on Machine Learning},\n  volume       = {37},\n  pages        = {2171--2180},\n  publisher    = {JMLR.org},\n  year         = {2015},\n  timestamp    = {Wed, 29 May 2019 08:41:45 +0200},\n  biburl       = {https://dblp.org/rec/conf/icml/SnoekRSKSSPPA15.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n\n@inproceedings{YuanSASH16,\n  author={Yuan, Yuan and Ong, Yew-Soon and Gupta, Abhishek and Tan, Puay Siew and Xu, Hua},\n  title={Evolutionary multitasking in permutation-based combinatorial optimization problems: Realization with TSP, QAP, LOP, and JSP},\n  booktitle={2016 IEEE Region 10 Conference (TENCON)},\n  pages={3157--3164},\n  year={2016},\n  organization={IEEE}\n}\n\n@misc{gpyopt2016,\n  author       = {The GPyOpt authors},\n  title        = {{GPyOpt}: {A} {Bayesian} Optimization framework in {Python}},\n  year         = {2016}\n}\n\n@article{FengZGZZTQ21,\n  author       = {Liang Feng and\n                  Lei Zhou and\n                  Abhishek Gupta and\n                  Jinghui Zhong and\n                  Zexuan Zhu and\n                  Kay Chen Tan and\n                  Alex Kai Qin},\n  title        = {Solving Generalized Vehicle Routing Problem With Occasional Drivers\n                  via Evolutionary Multitasking},\n  journal      = {{IEEE} Trans. Cybern.},\n  volume       = {51},\n  number       = {6},\n  pages        = {3171--3184},\n  year         = {2021},\n  timestamp    = {Tue, 01 Jun 2021 09:59:45 +0200},\n  biburl       = {https://dblp.org/rec/journals/tcyb/FengZGZZTQ21.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{CollbergP16,\n  author       = {Christian S. Collberg and\n                  Todd A. Proebsting},\n  title        = {Repeatability in computer systems research},\n  journal      = {Commun. 
{ACM}},\n  volume       = {59},\n  number       = {3},\n  pages        = {62--69},\n  year         = {2016},\n  timestamp    = {Tue, 06 Nov 2018 12:51:42 +0100},\n  biburl       = {https://dblp.org/rec/journals/cacm/CollbergP16.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{LWFSMP94,\n  title={Fast folding and comparison of RNA secondary structures},\n  author={Hofacker, Ivo L and Fontana, Walter and Stadler, Peter F and Bonhoeffer, L Sebastian and Tacker, Manfred and Schuster, Peter and others},\n  journal={Monatshefte f{\\\"{u}}r Chemie},\n  volume={125},\n  pages={167--167},\n  year={1994},\n  publisher={Springer Verlag}\n}\n\n@article{Carl83,\n  title        = {Molecular Technology: Designing Proteins and Peptides},\n  author       = {Pabo, Carl},\n  journal      = {Nature},\n  volume       = {301},\n  number       = {5897},\n  pages        = {200},\n  year         = {1983},\n}\n\n@article{Stephen21,\n  title        = {{RCSB Protein Data Bank: Powerful New Tools for Exploring 3D Structures of Biological Macromolecules for Basic and Applied Research and Education in Fundamental Biology, Biomedicine, Biotechnology, Bioengineering, and Energy Sciences}},\n  author       = {Burley, Stephen K. and Bhikadiya, Charmi and Bi, Chunxiao and Bittrich, Sebastian and Chen, Li and Crichlow, Gregg V. and Christie, Cole H. and Dalenberg, Kenneth and Di Costanzo, Luigi and Duarte, Jose M. and others},\n  journal      = {Nucleic Acids Research},\n  volume       = {49},\n  number       = {D1},\n  pages        = {D437--D451},\n  year         = {2021},\n  publisher    = {Oxford University Press},\n}\n\n@article{Christine97,\n  title        = {{CATH--A Hierarchic Classification of Protein Domain Structures}},\n  author       = {Orengo, Christine A. and Michie, Alex D. and Jones, Susan and Jones, David T. and Swindells, Mark B. and Thornton, Janet M.},\n  journal      = {Structure},\n  volume       = {5},\n  number       = {8},\n  pages        = {1093--1109},\n  year         = {1997},\n  publisher    = {Elsevier},\n}\n\n@article{Yang05,\n  title        = {{TM-align: A Protein Structure Alignment Algorithm Based on the TM-score}},\n  author       = {Zhang, Yang and Skolnick, Jeffrey},\n  journal      = {Nucleic Acids Research},\n  volume       = {33},\n  number       = {7},\n  pages        = {2302--2309},\n  year         = {2005},\n  publisher    = {Oxford University Press},\n}\n\n@article{ChandraGOG18,\n  author       = {Rohitash Chandra and\n                  Abhishek Gupta and\n                  Yew{-}Soon Ong and\n                  Chi{-}Keong Goh},\n  title        = {Evolutionary Multi-task Learning for Modular Knowledge Representation\n                  in Neural Networks},\n  journal      = {Neural Process. 
Lett.},\n  volume       = {47},\n  number       = {3},\n  pages        = {993--1009},\n  year         = {2018},\n  timestamp    = {Thu, 14 Oct 2021 09:37:11 +0200},\n  biburl       = {https://dblp.org/rec/journals/npl/ChandraGOG18.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{BaoQSBYLC18,\n  author       = {Liang Bao and\n                  Yutao Qi and\n                  Mengqing Shen and\n                  Xiaoxuan Bu and\n                  Jusheng Yu and\n                  Qian Li and\n                  Ping Chen},\n  title        = {An Evolutionary Multitasking Algorithm for Cloud Computing Service\n                  Composition},\n  booktitle    = {Services - {SERVICES} 2018 - 14th World Congress, Held as Part of\n                  the Services Conference Federation, {SCF} 2018, Seattle, WA, USA,\n                  June 25-30, 2018, Proceedings},\n  series       = {Lecture Notes in Computer Science},\n  volume       = {10975},\n  pages        = {130--144},\n  publisher    = {Springer},\n  year         = {2018},\n  timestamp    = {Tue, 14 May 2019 10:00:53 +0200},\n  biburl       = {https://dblp.org/rec/conf/services2/BaoQSBYLC18.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{PerroneJSA18,\n  author       = {Valerio Perrone and\n                  Rodolphe Jenatton and\n                  Matthias W. Seeger and\n                  C{\\'{e}}dric Archambeau},\n  title        = {Scalable Hyperparameter Transfer Learning},\n  booktitle    = {NIPS'18: Proc. of the 2018 Annual Conference on Neural Information Processing Systems},\n  pages        = {6846--6856},\n  year         = {2018},\n  timestamp    = {Mon, 16 May 2022 15:41:51 +0200},\n  biburl       = {https://dblp.org/rec/conf/nips/PerroneJSA18.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{SchillingWDS15,\n  author       = {Nicolas Schilling and\n                  Martin Wistuba and\n                  Lucas Drumond and\n                  Lars Schmidt{-}Thieme},\n  title        = {Hyperparameter Optimization with Factorized Multilayer Perceptrons},\n  booktitle    = {ECML/PKDD'15: Proc. of the 2015 Machine Learning and Knowledge Discovery in Databases - European Conference},\n  volume       = {9285},\n  pages        = {87--103},\n  publisher    = {Springer},\n  year         = {2015},\n  timestamp    = {Mon, 30 Nov 2020 08:47:26 +0100},\n  biburl       = {https://dblp.org/rec/conf/pkdd/SchillingWDS15.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{Broder97,\n  author       = {Andrei Z. Broder},\n  editor       = {Bruno Carpentieri and\n                  Alfredo De Santis and\n                  Ugo Vaccaro and\n                  James A. 
Storer},\n  title        = {On the resemblance and containment of documents},\n  booktitle    = {Compression and Complexity of {SEQUENCES}},\n  pages        = {21--29},\n  publisher    = {{IEEE}},\n  year         = {1997},\n  timestamp    = {Wed, 16 Oct 2019 14:14:56 +0200},\n  biburl       = {https://dblp.org/rec/conf/sequences/Broder97.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{WangKK18,\n  author       = {Zi Wang and\n                  Beomjoon Kim and\n                  Leslie Pack Kaelbling},\n  title        = {Regret bounds for meta {Bayesian} optimization with an unknown {Gaussian}\n                  process prior},\n  booktitle    = {NIPS'18: Proc. of the 2018 Annual Conference on Neural Information Processing Systems},\n  pages        = {10498--10509},\n  year         = {2018},\n  timestamp    = {Thu, 17 Nov 2022 14:05:51 +0100},\n  biburl       = {https://dblp.org/rec/conf/nips/WangKK18.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n\n@inproceedings{LawZCHS19,\n  author       = {Ho Chung Leon Law and\n                  Peilin Zhao and\n                  Leung Sing Chan and\n                  Junzhou Huang and\n                  Dino Sejdinovic},\n  title        = {Hyperparameter Learning via Distributional Transfer},\n  booktitle    = {NIPS'19: Proc. of the 2019 Annual Conference on Neural Information Processing Systems},\n  pages        = {6801--6812},\n  year         = {2019},\n  timestamp    = {Mon, 16 May 2022 15:41:51 +0200},\n  biburl       = {https://dblp.org/rec/conf/nips/LawZCHS19.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@inproceedings{SpringenbergKFH16,\n  author       = {Jost Tobias Springenberg and\n                  Aaron Klein and\n                  Stefan Falkner and\n                  Frank Hutter},\n  title        = {{Bayesian} Optimization with Robust {Bayesian} Neural Networks},\n  booktitle    = {NIPS'16: Proc. of the 2016 Annual Conference on Neural Information Processing Systems},\n  pages        = {4134--4142},\n  year         = {2016},\n  timestamp    = {Mon, 16 May 2022 15:41:51 +0200},\n  biburl       = {https://dblp.org/rec/conf/nips/SpringenbergKFH16.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n\n@inproceedings{HutterHL11,\n  author       = {Frank Hutter and\n                  Holger H. Hoos and\n                  Kevin Leyton{-}Brown},\n  title        = {Sequential Model-Based Optimization for General Algorithm Configuration},\n  booktitle    = {LION'11: Proc. 
of the 2011 Learning and Intelligent Optimization},\n  volume       = {6683},\n  pages        = {507--523},\n  publisher    = {Springer},\n  year         = {2011},\n  timestamp    = {Sun, 02 Jun 2019 21:10:54 +0200},\n  biburl       = {https://dblp.org/rec/conf/lion/HutterHL11.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@inproceedings{BardenetBKS13,\n  author       = {R{\\'{e}}mi Bardenet and\n                  M{\\'{a}}ty{\\'{a}}s Brendel and\n                  Bal{\\'{a}}zs K{\\'{e}}gl and\n                  Mich{\\`{e}}le Sebag},\n  title        = {Collaborative hyperparameter tuning},\n  booktitle    = {ICML'13: Proc. of the 2013 International Conference on Machine Learning},\n  volume       = {28},\n  pages        = {199--207},\n  publisher    = {JMLR.org},\n  year         = {2013},\n  timestamp    = {Wed, 29 May 2019 08:41:45 +0200},\n  biburl       = {https://dblp.org/rec/conf/icml/BardenetBKS13.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n\n@inproceedings{PfahringerBG00,\n    author       = {Bernhard Pfahringer and\n                    Hilan Bensusan and\n                    Christophe G. Giraud{-}Carrier},\n    title        = {Meta-Learning by Landmarking Various Learning Algorithms},\n    booktitle    = {ICML'00: Proc. of the 17th International Conference on Machine Learning},\n    pages        = {743--750},\n    publisher    = {Morgan Kaufmann},\n    year         = {2000},\n    timestamp    = {Sun, 21 Feb 2010 20:54:50 +0100},\n    biburl       = {https://dblp.org/rec/conf/icml/PfahringerBG00.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{YogatamaM14,\n    author       = {Dani Yogatama and\n                    Gideon Mann},\n    title        = {Efficient Transfer Learning Method for Automatic Hyperparameter Tuning},\n    booktitle    = {AISTATS'14: Proc. of the 2014 International Conference on Artificial Intelligence and Statistics},\n    volume       = {33},\n    pages        = {1077--1085},\n    publisher    = {JMLR.org},\n    year         = {2014},\n    timestamp    = {Wed, 29 May 2019 08:41:44 +0200},\n    biburl       = {https://dblp.org/rec/conf/aistats/YogatamaM14.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{PoloczekWF17,\n    author       = {Matthias Poloczek and\n                    Jialei Wang and\n                    Peter I. Frazier},\n    title        = {Multi-Information Source Optimization},\n    booktitle    = {NIPS'17: Proc. of the 2017 Annual Conference on Neural Information Processing Systems},\n    pages        = {4288--4298},\n    year         = {2017},\n    timestamp    = {Thu, 21 Jan 2021 15:15:21 +0100},\n    biburl       = {https://dblp.org/rec/conf/nips/PoloczekWF17.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{WistubaSS16,\n    author       = {Martin Wistuba and\n                    Nicolas Schilling and\n                    Lars Schmidt{-}Thieme},\n    title        = {Two-Stage Transfer Surrogate Model for Automatic Hyperparameter Optimization},\n    booktitle    = {ECML/PKDD'16: Proc. 
of the 2016 Machine Learning and Knowledge Discovery in Databases.},\n    volume       = {9851},\n    pages        = {199--214},\n    publisher    = {Springer},\n    year         = {2016},\n    timestamp    = {Thu, 05 Dec 2019 17:07:16 +0100},\n    biburl       = {https://dblp.org/rec/conf/pkdd/WistubaSS16.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{FeurerBE15,\n    author       = {Matthias Feurer and\n                    Benjamin Letham and\n                    Eytan Bakshy},\n    title        = {Scalable meta-learning for {Bayesian} optimization using ranking-weighted {Gaussian} process ensembles},\n    booktitle    = {ICML 2018 AutoML Workshop},\n    volume       = {7},\n    pages        = {1--15},\n    year         = {2018}\n}\n\n@article{Javidian19,\n  author       = {Mohammad Ali Javidian and\n                  Pooyan Jamshidi and\n                  Marco Valtorta},\n  title        = {Transfer Learning for Performance Modeling of Configurable Systems:\n                  {A} Causal Analysis},\n  journal      = {CoRR},\n  volume       = {abs/1902.10119},\n  year         = {2019},\n  eprinttype    = {arXiv},\n  eprint       = {1902.10119},\n  timestamp    = {Tue, 21 May 2019 18:03:38 +0200},\n  biburl       = {https://dblp.org/rec/journals/corr/abs-1902-10119.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{ZhuH23,\n  author       = {Mingxuan Zhu and\n                  Dan Hao},\n  title        = {Compiler Auto-Tuning via Critical Flag Selection},\n  booktitle    = {ASE'23: The Proceeding of 2023 {IEEE/ACM} International Conference on Automated Software Engineering},\n  pages        = {1000--1011},\n  publisher    = {{IEEE}},\n  year         = {2023},\n  timestamp    = {Thu, 16 Nov 2023 09:03:51 +0100},\n  biburl       = {https://dblp.org/rec/conf/kbse/ZhuH23.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{Nair0MSA20,\n  author       = {Vivek Nair and\n                  Zhe Yu and\n                  Tim Menzies and\n                  Norbert Siegmund and\n                  Sven Apel},\n  title        = {Finding Faster Configurations Using {FLASH}},\n  journal      = {{IEEE} Trans. 
Software Eng.},\n  volume       = {46},\n  number       = {7},\n  pages        = {794--811},\n  year         = {2020},\n  timestamp    = {Fri, 31 Jul 2020 17:07:30 +0200},\n  biburl       = {https://dblp.org/rec/journals/tse/Nair0MSA20.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@inproceedings{RothfussHCK21,\n  author       = {Jonas Rothfuss and\n                  Dominique Heyn and\n                  Jinfan Chen and\n                  Andreas Krause},\n  title        = {Meta-Learning Reliable Priors in the Function Space},\n  booktitle    = {NIPS'21: Proc of the 2021 Advances in Neural Information Processing Systems},\n  pages        = {280--293},\n  year         = {2021},\n  timestamp    = {Tue, 03 May 2022 16:20:46 +0200},\n  biburl       = {https://dblp.org/rec/conf/nips/RothfussHCK21.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@inproceedings{ChenXCZ21,\n    author       = {Junjie Chen and\n                    Ningxin Xu and\n                    Peiqi Chen and\n                    Hongyu Zhang},\n    title        = {Efficient Compiler Autotuning via {Bayesian} Optimization},\n    booktitle    = {ICSE'21: Proc of the 43rd {IEEE/ACM} International Conference on Software Engineering},\n    pages        = {1198--1209},\n    publisher    = {{IEEE}},\n    year         = {2021},\n    timestamp    = {Mon, 03 Jan 2022 22:27:59 +0100},\n    biburl       = {https://dblp.org/rec/conf/icse/0003XC021.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{DalibardSY17,\n  author       = {Valentin Dalibard and\n                  Michael Schaarschmidt and\n                  Eiko Yoneki},\n  title        = {{BOAT:} Building Auto-Tuners with Structured {Bayesian} Optimization},\n  booktitle    = {WWW'17: Proc of the 2017 International Conference on World Wide Web,\n                  {WWW} 2017, Perth, Australia, April 3-7, 2017},\n  pages        = {479--488},\n  publisher    = {{ACM}},\n  year         = {2017},\n  timestamp    = {Tue, 06 Nov 2018 16:57:08 +0100},\n  biburl       = {https://dblp.org/rec/conf/www/DalibardSY17.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n\n@inproceedings{YaoWHXHL21,\n  author       = {Huaxiu Yao and\n                  Ying Wei and\n                  Long{-}Kai Huang and\n                  Ding Xue and\n                  Junzhou Huang and\n                  Zhenhui Li},\n  title        = {Functionally Regionalized Knowledge Transfer for Low-resource Drug\n                  Discovery},\n  booktitle    = {NIPS'21: Proc of the 2021 Advances in Neural Information Processing Systems},\n  pages        = {8256--8268},\n  year         = {2021},\n  timestamp    = {Tue, 03 May 2022 16:20:47 +0200},\n  biburl       = {https://dblp.org/rec/conf/nips/YaoWHXHL21.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@inproceedings{ErikssonPGTP19,\n    author       = {David Eriksson and\n                    Michael Pearce and\n                    Jacob R. 
Gardner and\n                    Ryan Turner and\n                    Matthias Poloczek},\n    title        = {Scalable Global Optimization via Local {Bayesian} Optimization},\n    booktitle    = {NIPS'19: Proc of the 32nd Annual Conference on Neural Information Processing Systems},\n    pages        = {5497--5508},\n    year         = {2019},\n    timestamp    = {Mon, 16 May 2022 15:41:51 +0200},\n    biburl       = {https://dblp.org/rec/conf/nips/ErikssonPGTP19.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{PfistererSMBB22,\n  author       = {Florian Pfisterer and\n                  Lennart Schneider and\n                  Julia Moosbauer and\n                  Martin Binder and\n                  Bernd Bischl},\n  title        = {{YAHPO} Gym - An Efficient Multi-Objective Multi-Fidelity Benchmark\n                  for Hyperparameter Optimization},\n  booktitle    = {AutoML'22: Proc of the 2022 International Conference on Automated Machine Learning},\n  series       = {Proceedings of Machine Learning Research},\n  volume       = {188},\n  pages        = {3/1--39},\n  publisher    = {{PMLR}},\n  year         = {2022},\n  timestamp    = {Mon, 28 Nov 2022 12:30:36 +0100},\n  biburl       = {https://dblp.org/rec/conf/automl/PfistererSMBB22.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{hirose2021bench,\n  title={{NAS-HPO-Bench-II}: A Benchmark Dataset on Joint Optimization of Convolutional Neural Network Architecture and Training Hyperparameters},\n  author={Hirose, Yoichi and Yoshinari, Nozomu and Shirakawa,  Shinichi},\n  booktitle={Proceedings of the 13th Asian Conference on Machine Learning},\n  year={2021}\n}\n@inproceedings{duan2021transnas,\n  title = {TransNAS-Bench-101: Improving Transferability and Generalizability of Cross-Task Neural Architecture Search},\n  author = {Duan, Yawen and Chen, Xin and Xu, Hang and Chen, Zewei and Liang, Xiaodan and Zhang, Tong and Li, Zhenguo},\n  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},\n  pages = {5251--5260},\n  year = {2021}\n}\n\n@inproceedings{EggenspergerMMF21,\n  author       = {Katharina Eggensperger and\n                  Philipp M{\\\"{u}}ller and\n                  Neeratyoy Mallik and\n                  Matthias Feurer and\n                  Ren{\\'{e}} Sass and\n                  Aaron Klein and\n                  Noor H. Awad and\n                  Marius Lindauer and\n                  Frank Hutter},\n  title        = {HPOBench: {A} Collection of Reproducible Multi-Fidelity Benchmark\n                  Problems for {HPO}},\n  booktitle    = {NIPS'21: Proc of the 2021 Neural Information Processing Systems Track on Datasets and Benchmarks 1},\n  year         = {2021},\n  timestamp    = {Thu, 05 May 2022 16:53:59 +0200},\n  biburl       = {https://dblp.org/rec/conf/nips/EggenspergerMMF21.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@inproceedings{LiJDRT17,\n  author       = {Lisha Li and\n                  Kevin G. 
Jamieson and\n                  Giulia DeSalvo and\n                  Afshin Rostamizadeh and\n                  Ameet Talwalkar},\n  title        = {Hyperband: Bandit-Based Configuration Evaluation for Hyperparameter\n                  Optimization},\n  booktitle    = {ICLR'17: Proc of the 2017 International Conference on Learning Representations},\n  publisher    = {OpenReview.net},\n  year         = {2017},\n  timestamp    = {Thu, 25 Jul 2019 14:26:05 +0200},\n  biburl       = {https://dblp.org/rec/conf/iclr/LiJDRT17.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n\n@inproceedings{LindauerH18,\n  author       = {Marius Lindauer and\n                  Frank Hutter},\n  editor       = {Sheila A. McIlraith and\n                  Kilian Q. Weinberger},\n  title        = {Warmstarting of Model-Based Algorithm Configuration},\n  booktitle    = {AAAI'18: Proc of the 2018 {AAAI} Conference on Artificial Intelligence},\n  pages        = {1355--1362},\n  publisher    = {{AAAI} Press},\n  year         = {2018},\n  timestamp    = {Mon, 04 Sep 2023 16:50:25 +0200},\n  biburl       = {https://dblp.org/rec/conf/aaai/LindauerH18.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n\n@inproceedings{SrinivasKKS10,\n    author       = {Niranjan Srinivas and\n                    Andreas Krause and\n                    Sham M. Kakade and\n                    Matthias W. Seeger},\n    title        = {{Gaussian} Process Optimization in the Bandit Setting: No Regret and\n        Experimental Design},\n    booktitle    = {ICML'10: Proc. of the 27th International Conference on Machine Learning},\n    pages        = {1015--1022},\n    publisher    = {Omnipress},\n    year         = {2010},\n    timestamp    = {Tue, 23 Jul 2019 15:03:10 +0200},\n    biburl       = {https://dblp.org/rec/conf/icml/SrinivasKKS10.bib},\n    bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n@article{Tantithamthavorn19,\n  author       = {Chakkrit Tantithamthavorn and\n                  Shane McIntosh and\n                  Ahmed E. Hassan and\n                  Kenichi Matsumoto},\n  title        = {The Impact of Automated Parameter Optimization on Defect Prediction\n                  Models},\n  journal      = {{IEEE} Trans. Software Eng.},\n  volume       = {45},\n  number       = {7},\n  pages        = {683--711},\n  year         = {2019},\n  timestamp    = {Thu, 08 Aug 2019 11:07:40 +0200},\n  biburl       = {https://dblp.org/rec/journals/tse/Tantithamthavorn19.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{Wilcoxon1945IndividualCB,\n    author = {Frank Wilcoxon},\n    title  = {Individual Comparisons by Ranking Methods},\n    year   = {1945}\n}\n\n@article{VarghaD00,\n    author    ={Andr{\\'a}s Vargha and \n                Harold D. Delaney},\n    title     = {A Critique and Improvement of the CL Common Language Effect Size Statistics of McGraw and Wong},\n    journal   = {J. Educ. Behav. Stat.},\n    volume    = {25},\n    number    = {2},\n    pages     = {101-132},\n    year      = {2000}\n}\n\n\n@article{HennigS12,\n  author       = {Philipp Hennig and\n                  Christian J. Schuler},\n  title        = {Entropy Search for Information-Efficient Global Optimization},\n  journal      = {J. Mach. Learn. 
Res.},\n  volume       = {13},\n  pages        = {1809--1837},\n  year         = {2012},\n  timestamp    = {Thu, 02 Jun 2022 13:58:57 +0200},\n  biburl       = {https://dblp.org/rec/journals/jmlr/HennigS12.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{JonesSW98,\n  author       = {Donald R. Jones and\n                  Matthias Schonlau and\n                  William J. Welch},\n  title        = {Efficient Global Optimization of Expensive Black-Box Functions},\n  journal      = {J. Glob. Optim.},\n  volume       = {13},\n  number       = {4},\n  pages        = {455--492},\n  year         = {1998},\n  timestamp    = {Fri, 11 Sep 2020 13:04:22 +0200},\n  biburl       = {https://dblp.org/rec/journals/jgo/JonesSW98.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{Cowen-Rivers2022,\nauthor = {Cowen-Rivers, Alexander and Lyu, Wenlong and Tutunov, Rasul and Wang, Zhi and Grosnit, Antoine and Griffiths, Ryan-Rhys and Maravel, Alexandre and Hao, Jianye and Wang, Jun and Peters, Jan and Bou Ammar, Haitham},\nyear = {2022},\nmonth = {07},\npages = {},\ntitle = {HEBO: Pushing The Limits of Sample-Efficient Hyperparameter Optimisation},\nvolume = {74},\njournal = {Journal of Artificial Intelligence Research}\n}\n\n\n@article{BalandatKJDLB,\n  author       = {Maximilian Balandat and\n                  Brian Karrer and\n                  Daniel R. Jiang and\n                  Samuel Daulton and\n                  Benjamin Letham and\n                  Andrew Gordon Wilson and\n                  Eytan Bakshy},\n  title        = {BoTorch: Programmable {Bayesian} Optimization in {PyTorch}},\n  journal      = {CoRR},\n  volume       = {abs/1910.06403},\n  year         = {2019},\n  eprinttype    = {arXiv},\n  eprint       = {1910.06403},\n  timestamp    = {Wed, 16 Oct 2019 16:25:53 +0200},\n  biburl       = {https://dblp.org/rec/journals/corr/abs-1910-06403.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{AwadMH21,\n  author       = {Noor H. 
Awad and\n                  Neeratyoy Mallik and\n                  Frank Hutter},\n  editor       = {Zhi{-}Hua Zhou},\n  title        = {{DEHB:} Evolutionary Hyperband for Scalable, Robust and Efficient\n                  Hyperparameter Optimization},\n  booktitle    = {IJCAI'21: Proc of the 2021 International Joint Conference on Artificial Intelligence},\n  pages        = {2147--2153},\n  publisher    = {ijcai.org},\n  year         = {2021},\n  timestamp    = {Wed, 25 Aug 2021 17:11:16 +0200},\n  biburl       = {https://dblp.org/rec/conf/ijcai/AwadMH21.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{GPflowOpt2017,\n   author = {Knudde, Nicolas and {van der Herten}, Joachim and Dhaene, Tom and Couckuyt, Ivo},\n    title = \"{{GP}flow{O}pt: {A} {B}ayesian {O}ptimization {L}ibrary using Tensor{F}low}\",\n  journal = {arXiv preprint -- arXiv:1711.03845},\n  year    = {2017},\n}\n\n\n\n@inproceedings{WistubaSS15b,\n  author       = {Martin Wistuba and\n                  Nicolas Schilling and\n                  Lars Schmidt{-}Thieme},\n  title        = {Hyperparameter Search Space Pruning - A New Component for Sequential Model-Based Hyperparameter Optimization},\n  booktitle    = {ECML/PKDD'15: Proc of the 2015 Advances in Machine Learning and Knowledge Discovery in Databases},\n  volume       = {9285},\n  pages        = {104--119},\n  year         = {2015},\n  timestamp    = {Mon, 30 Nov 2020 08:47:26 +0100},\n  biburl       = {https://dblp.org/rec/conf/pkdd/WistubaSS15.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@inproceedings{PerroneS19,\n  author       = {Valerio Perrone and\n                  Huibin Shen},\n  title        = {Learning search spaces for {Bayesian} optimization: Another view of\n                  hyperparameter transfer learning},\n  booktitle    = {NIPS'19: Proc of the 2019 Advances in Neural Information Processing Systems},\n  pages        = {12751--12761},\n  year         = {2019},\n  timestamp    = {Mon, 16 May 2022 15:41:51 +0200},\n  biburl       = {https://dblp.org/rec/conf/nips/PerroneS19.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n@inproceedings{ChenSLW0DKKDRPF22,\n  author       = {Yutian Chen and\n                  Xingyou Song and\n                  Chansoo Lee and\n                  Zi Wang and\n                  Richard Zhang and\n                  David Dohan and\n                  Kazuya Kawakami and\n                  Greg Kochanski and\n                  Arnaud Doucet and\n                  Marc'Aurelio Ranzato and\n                  Sagi Perel and\n                  Nando de Freitas},\n  title        = {Towards Learning Universal Hyperparameter Optimizers with Transformers},\n  booktitle    = {NIPS'22: Proc of the 2022 Advances in Neural Information Processing Systems},\n  year         = {2022},\n  timestamp    = {Thu, 11 May 2023 17:08:22 +0200},\n  biburl       = {https://dblp.org/rec/conf/nips/ChenSLW0DKKDRPF22.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{BergstraBBK11,\n  author       = {James Bergstra and\n                  R{\'{e}}mi Bardenet and\n                  Yoshua Bengio and\n                  Bal{\'{a}}zs K{\'{e}}gl},\n  title        = {Algorithms for Hyper-Parameter Optimization},\n  booktitle    = {NIPS'11: Proc. 
of the 2011 Advances in Neural Information Processing Systems},\n  pages        = {2546--2554},\n  year         = {2011},\n  timestamp    = {Mon, 16 May 2022 15:41:51 +0200},\n  biburl       = {https://dblp.org/rec/conf/nips/BergstraBBK11.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{MullerFHH23,\n  author       = {Samuel M{\\\"{u}}ller and\n                  Matthias Feurer and\n                  Noah Hollmann and\n                  Frank Hutter},\n  title        = {PFNs4BO: In-Context Learning for {Bayesian} Optimization},\n  booktitle    = {ICML'23: Proc of the 2023 International Conference on Machine Learning},\n  volume       = {202},\n  pages        = {25444--25470},\n  publisher    = {{PMLR}},\n  year         = {2023},\n  timestamp    = {Mon, 28 Aug 2023 17:23:08 +0200},\n  biburl       = {https://dblp.org/rec/conf/icml/0005FHH23.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@book{RasmussenW06,\n    author    = {Carl Edward Rasmussen and\n                 Christopher K. I. Williams},\n    title     = {{Gaussian} processes for machine learning},\n    publisher = {{MIT} Press},\n    year      = {2006},\n    isbn      = {026218253X},\n    timestamp = {Wed, 26 Apr 2017 17:48:08 +0200},\n    biburl    = {https://dblp.org/rec/bib/books/lib/RasmussenW06},\n    bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n\n@book{Garnett23,\n    author    = {Roman Garnett},\n    title     = {{Bayesian} Optimization},\n    publisher = {Cambridge University Press},\n    year      = {2023},\n    month     = {January},\n    isbn      = {9781108348973}\n}\n\n\n\n@book{johnson1985,\n  title={The critical difference: Essays in the contemporary rhetoric of reading},\n  author={Johnson, Barbara},\n  year={1985},\n  publisher={JHU Press}\n}\n\n@book{MichieST94,\n  author       = {Donald Michie and\n                  David J. Spiegelhalter and\n                  Charles C. Taylor},\n  title        = {Machine Learning, Neural and Statistical Classification},\n  publisher    = {Ellis Horwood},\n  year         = {1994},\n  biburl       = {https://dblp.org/rec/books/eh/MichieST94.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@book{LeskovecRU14,\n  author       = {Jure Leskovec and\n                  Anand Rajaraman and\n                  Jeffrey D. Ullman},\n  title        = {Mining of Massive Datasets, 2nd Ed},\n  publisher    = {Cambridge University Press},\n  year         = {2014},\n  isbn         = {978-1107077232},\n  timestamp    = {Wed, 10 Jul 2019 10:47:04 +0200},\n  biburl       = {https://dblp.org/rec/books/cu/LeskovecRU14.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{TuRKSST22,\n  author       = {Renbo Tu and\n                  Nicholas Roberts and\n                  Mikhail Khodak and\n                  Junhong Shen and\n                  Frederic Sala and\n                  Ameet Talwalkar},\n  title        = {NAS-Bench-360: Benchmarking Neural Architecture Search on Diverse\n                  Tasks},\n  booktitle    = {NIPS'22: Proc. of the 36th Annual Conference\n                  on Neural Information Processing Systems},\n  year         = {2022},\n  timestamp    = {Mon, 08 Jan 2024 16:31:37 +0100},\n  biburl       = {https://dblp.org/rec/conf/nips/TuRKSST22.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@inproceedings{FreundS95,\n  author       = {Yoav Freund and\n                  Robert E. Schapire},\n  title        = {A decision-theoretic generalization of on-line learning and an application\n                  to boosting},\n  booktitle    = {EuroCOLT'95: Proc. of the Second European Conference on Computational Learning Theory},\n  series       = {Lecture Notes in Computer Science},\n  volume       = {904},\n  pages        = {23--37},\n  publisher    = {Springer},\n  year         = {1995},\n  timestamp    = {Tue, 14 May 2019 10:00:53 +0200},\n  biburl       = {https://dblp.org/rec/conf/eurocolt/FreundS95.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@article{JSSv033i01,\n author        = {Friedman, Jerome H. and Hastie, Trevor and Tibshirani, Rob},\n title         = {Regularization Paths for Generalized Linear Models via Coordinate Descent},\n volume        = {33},\n number        = {1},\n journal       = {Journal of Statistical Software},\n year          = {2010},\n pages         = {1--22}\n}\n\n\n@article{ShankerBHK23,\nauthor         = {Shanker, Varun R. and Bruun, Theodora U.J. and Hie, Brian L. and Kim, Peter S.},\ntitle          = {Inverse folding of protein complexes with a structure-informed language model enables unsupervised antibody evolution},\nyear           = {2023},\npublisher      = {Cold Spring Harbor Laboratory},\njournal        = {bioRxiv}\n}\n\n@phdthesis{Neal95,\n  author       = {Radford M. Neal},\n  title        = {Bayesian learning for neural networks},\n  school       = {University of Toronto, Canada},\n  year         = {1995},\n  timestamp    = {Wed, 10 Aug 2022 16:24:08 +0200},\n  biburl       = {https://dblp.org/rec/phd/ca/Neal95.bib},\n  bibsource    = {dblp computer science bibliography, https://dblp.org}\n}\n\n@misc{Dua19,\n  author = {Dheeru Dua and Casey Graff},\n  title = {UCI Machine Learning Repository},\n  year = {2019},\n  url = {http://archive.ics.uci.edu/ml},\n  institution = {University of California, Irvine, School of Information and Computer Sciences}\n}\n\n@misc{ROBERTGA21, \n  author      = {ROBERT, Philippe and Greiff, Victor and Akbar, Rahmad},\n  title       = {Absolut! in silico antibody-antigen binding database},\n  publisher   = {Archive2014},\n  year        = {2021}\n}\n\n"
  },
  {
    "path": "docs/source/usage/algorithms.rst",
    "content": "Algorithmic objects\n===================\n\n.. admonition:: Overview\n   :class: info\n   \n   - :ref:`Register <register-new-algorithm>`: How to register a new algorithmic Object to :ref:`TransOPT <home>`\n   - :ref:`Supported Algorithms <alg>`: The list of the synthetic problems available in :ref:`TransOPT <home>`\n   - :ref:`Algorithmic Objects<alg-obj>`: The list of the protein inverse folding problems available in :ref:`TransOPT <home>`\n\n\n.. _register-new-algorithm:\n\nRegistering a New Algorithm in TransOPT\n---------------------------------------\n\nTo register a new algorithm object in TransOPT, follow the steps outlined below:\n\n1. **Import the Model Registry**\n\n   First, you need to import the `model_registry` from the `transopt.agent.registry` module:\n\n   .. code-block:: python\n\n      from transopt.agent.registry import model_registry\n\n2. **Define the Algorithm Object Name**\n\n   Next, use the registry to define the name of your algorithm object. For example:\n\n   .. code-block:: python\n\n      @model_registry.register(\"MHGP\")\n      class MHGP(Model):\n          pass\n\n   In this example, the algorithm object is named \"MHGP\".\n\n3. **Choose the Appropriate Base Class**\n\n   Depending on the type of algorithm object you are creating, you must inherit from a specific base class. TransOPT provides several algorithm modules, each corresponding to a different base class:\n\n   - **Surrogate Model**: Inherit from the `Model` class.\n   - **Initialization Design**: Inherit from the `Sampler` class.\n   - **Acquisition Function**: Inherit from the `AcquisitionBase` class.\n   - **Pretrain Module**: Inherit from the `PretrainBase` class.\n   - **Normalizer Module**: Inherit from the `NormalizerBase` class.\n\n   For instance, in the example provided, we are creating a surrogate model, so the `MHGP` class inherits from the `Model` base class.\n\n4. **Implement the Required Abstract Methods**\n\n   Once the class is defined, you need to implement several abstract methods that are required by the `Model` base class. These methods include:\n\n   .. code-block:: python\n\n      def meta_fit(\n          self,\n          source_X : List[np.ndarray],\n          source_Y : List[np.ndarray],\n          optimize: Union[bool, Sequence[bool]] = True,\n      ):\n          pass\n\n      def fit(\n          self,\n          X: np.ndarray,\n          Y: np.ndarray,\n          optimize: bool = False,\n      ):\n          pass\n\n      def predict(\n          self, X: np.ndarray, return_full: bool = False, with_noise: bool = False\n      ) -> Tuple[np.ndarray, np.ndarray]:\n          pass\n\n   - **meta_fit**: This method is used to fit meta-data. If your transfer optimization algorithm requires meta-data, this is where you should leverage it.\n   - **fit**: This method is used to fit the data for the current task.\n\nBy following these steps, you can successfully register a new algorithm object in TransOPT and implement the necessary functionality to integrate it into the framework.\n\n\n\n.. _alg:\n\nSupported Algorithms\n--------------------\n\nSearch space transform\n^^^^^^^^^^^^^^^^^^^^^^\n**Hyperparameter Search Space Pruning – A New Component for Sequential Model-Based Hyperparameter Optimization**:cite:`WistubaSS15b`\n\nThis method prunes ineffective regions of the hyperparameter search space by using past evaluations to guide the optimization. 
It identifies areas with low potential by analyzing the performance of sampled configurations and employing a surrogate model to predict future outcomes. Regions that consistently show poor performance or low expected improvement are marked as low potential. The method then updates the search process to focus on more promising regions, thereby improving optimization efficiency and reducing unnecessary evaluations.\n\n**Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning**:cite:`PerroneS19`\n\nThe method replaces the predefined search space with data-driven geometrical representations (e.g., ellipsoids and boxes) by analyzing historical data to identify high-performing regions and fitting these regions with geometrical shapes. This transformation narrows the search to promising areas, improving efficiency as the search space dimension increases.\n\nInitialization Design\n^^^^^^^^^^^^^^^^^^^^^^\n**Few-Shot Bayesian Optimization with Deep Kernel Surrogates**:cite:`WistubaG21`\n\nThis method leverages historical task data and an evolutionary algorithm to provide a warm-start initialization. By selecting hyperparameter settings that minimize a loss function across multiple tasks, the method accelerates optimization with fewer evaluations. \n\n**Initializing Bayesian Hyperparameter Optimization via Meta-Learning**:cite:`FeurerSH15`\n\nThis method introduces a meta-learning-based initialization for BO, improving the starting point by leveraging hyperparameter configurations that worked well on similar datasets. These similar datasets are identified through meta-features. The method calculates the distance between datasets using these meta-features, selecting the most similar ones to initialize the optimization process efficiently.\n\n**Learning Hyperparameter Optimization Initializations**:cite:`WistubaSS15a`\n\nThis method proposes to use a meta-loss function that is minimized through gradient-based optimization. By optimizing for a meta-loss derived from the response functions of past datasets, it generates entirely new configurations, whereas prior methods limited themselves to reusing configurations from similar datasets.\n\nSurrogate Model\n^^^^^^^^^^^^^^^^^^^^^^\n**Pre-trained Gaussian processes for Bayesian optimization**:cite:`Wang2021`\n\nIn this method, the surrogate model is built on a pre-trained GP with data from related tasks. This approach uses a KL divergence-based loss function to pre-train the GP, ensuring it captures similarities between the target function and past data. The pre-trained GP serves as the prior for BO, allowing the model to make better predictions with fewer observations by leveraging the pre-trained knowledge.\n\n**Few-Shot Bayesian Optimization with Deep Kernel Surrogates**\n\nIn this method, the surrogate model is a deep kernel Gaussian process that is meta-learned across multiple past tasks. This model enables quick adaptation to new tasks with limited evaluations. The deep kernel, which combines a neural network and a Gaussian process, provides uncertainty estimates, helping the model generalize across diverse tasks while being fine-tuned for new ones.\n\n**Google Vizier: A Service for Black-Box Optimization**:cite:`GolovinSMKKS17`\n\nThis method transfers source knowledge by using the posterior mean of the source task as the prior mean for the target task. 
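\n\nAs a minimal sketch of this mean-transfer idea (illustrative only: it uses scikit-learn's `GaussianProcessRegressor` on toy data rather than TransOPT's own model classes), the target GP can be fitted on residuals against the source posterior mean:\n\n.. code-block:: python\n\n   import numpy as np\n   from sklearn.gaussian_process import GaussianProcessRegressor\n\n   rng = np.random.default_rng(0)\n\n   # Source task: ample data; its posterior mean acts as the target prior mean.\n   X_src = rng.uniform(-2.0, 2.0, size=(50, 1))\n   gp_src = GaussianProcessRegressor().fit(X_src, np.sin(3.0 * X_src).ravel())\n\n   # Target task: only a handful of observations of a shifted variant.\n   X_tgt = rng.uniform(-2.0, 2.0, size=(5, 1))\n   y_tgt = np.sin(3.0 * X_tgt).ravel() + 0.3\n\n   # Fit the target GP on residuals against the source posterior mean, so that\n   # predictions revert to the source mean wherever target data is scarce.\n   gp_res = GaussianProcessRegressor().fit(X_tgt, y_tgt - gp_src.predict(X_tgt))\n\n   def predict_target(X):\n       mean, std = gp_res.predict(X, return_std=True)\n       return gp_src.predict(X) + mean, std\n\n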
This approach simplifies the transfer process by ignoring uncertainty from the source model and only leveraging the mean, which leads to reduced computational complexity while still incorporating valuable information from the source task. \n\n**PFNs4BO: In-Context Learning for Bayesian Optimization**:cite:`MullerFHH23`\n\nThis method utilizes a Transformer-based architecture called Prior-data Fitted Networks (PFNs). These networks are trained on synthetic datasets to approximate the posterior predictive distribution (PPD) through in-context learning. PFNs can be trained on any efficiently sampled prior distribution, such as Gaussian processes or Bayesian neural networks. By learning from diverse priors, the PFN surrogate model captures complex patterns in the optimization process, allowing it to make accurate predictions while maintaining flexibility to incorporate user-defined priors or handle spurious dimensions effectively.\n\n**Scalable Gaussian process-based transfer surrogates for hyperparameter optimization**:cite:`WistubaSS18`\n\nThis method introduces an ensemble of GPs, where each GP is trained on a different past task. The model uses a weighted sum approach to combine the predictions from each GP. The weights are assigned based on how well each GP predicts the target task, with more relevant models receiving higher weights. \n\n**Scalable Meta-Learning for Bayesian Optimization using Ranking-Weighted Gaussian Process Ensembles**:cite:`FeurerBE15`\n\nThis method introduces Ranking-Weighted Gaussian Process Ensembles (RGPE). Similar to previous approaches, the surrogate model combines an ensemble of GPs. However, in RGPE, the weights are determined using a ranking loss function, which assesses how effectively each GP ranks the observations from the current task. GPs that rank the observations more accurately are assigned higher weights, reflecting their greater relevance to the task at hand.\n\n**Multi-Task Bayesian Optimization**:cite:`SwerskySA13`\n\nThis method uses multi-task Gaussian processes (MTGP) as the surrogate model. It trains a GP for each task and uses a shared covariance structure across tasks to improve predictive accuracy. By leveraging the relationships between tasks, the MTGP reduces the need for independent function evaluations, making the optimization process faster and more efficient.\n\n**Multi-Fidelity Bayesian Optimization via Deep Neural Networks**:cite:`LiXKZ20`\n\nIn this method, the surrogate model employs a deep neural network designed to handle multi-fidelity optimization tasks. The DNN surrogate models each fidelity with a neural network, and higher fidelities are conditioned on the outputs from lower fidelities. By stacking neural networks for each fidelity level, the model captures nonlinear relationships between different fidelities. This structure allows the surrogate to propagate information across fidelities, improving the accuracy of function estimation at higher fidelities while reducing computational costs.\n\n**BOHB: Robust and Efficient Hyperparameter Optimization at Scale**:cite:`FalknerKH18`\n\nIn this method, the surrogate model uses a Tree-structured Parzen Estimator (TPE) to model the hyperparameter space. TPE builds separate probability models for good and bad configurations using kernel density estimation. The TPE model guides the search by maximizing the ratio between these models, effectively focusing on promising regions of the search space. 
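\n\nTo make the TPE mechanics above concrete, here is a minimal one-dimensional sketch (the function name and the toy objective are ours, not part of TransOPT): observations are split at the gamma-quantile of the objective, a kernel density estimate is fitted to each group, and the candidate maximizing the density ratio l(x)/g(x) is proposed:\n\n.. code-block:: python\n\n   import numpy as np\n   from scipy.stats import gaussian_kde\n\n   def tpe_suggest(X, y, gamma=0.25, n_candidates=256):\n       # Split observations into 'good' and 'bad' by the gamma-quantile of the\n       # objective (minimization), fit a KDE to each group, and return the\n       # candidate that maximizes the density ratio l(x) / g(x).\n       cut = np.quantile(y, gamma)\n       l = gaussian_kde(X[y <= cut])\n       g = gaussian_kde(X[y > cut])\n       candidates = l.resample(n_candidates).ravel()  # sample from the 'good' model\n       scores = l(candidates) / np.maximum(g(candidates), 1e-12)\n       return candidates[np.argmax(scores)]\n\n   rng = np.random.default_rng(1)\n   X = rng.uniform(-5.0, 5.0, size=64)\n   y = X ** 2                      # toy objective: minimum at x = 0\n   print(tpe_suggest(X, y))        # proposes a point near the low-loss region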
\n\nAcquisition Function\n^^^^^^^^^^^^^^^^^^^^\n**Scalable Meta-Learning for Bayesian Optimization using Ranking-Weighted Gaussian Process Ensembles**\n\nIn RGPE, the acquisition function follows standard BO methods but integrates the ranking-weighted ensemble model. The ensemble combines predictions from multiple GPs, each weighted based on its ranking performance in relation to the current task. The acquisition function then uses this weighted ensemble to balance exploration and exploitation, ensuring that the most relevant past models are given greater influence when selecting the next point to evaluate.\n\n**Scalable Gaussian process-based transfer surrogates for hyperparameter optimization**\n\nThis approach is referred to as the *transfer acquisition function* (TAF). The acquisition function balances exploration and exploitation by combining the predicted improvement from the new data with predicted improvements from previous tasks, weighted by their relevance. The weights are calculated in the same way as the surrogate model weights.\n\n**Multi-Task Bayesian Optimization**\n\nIn this method, the acquisition function extends the standard EI criterion to the multi-task setting. It dynamically selects which task to evaluate by considering the correlation between tasks. The acquisition function maximizes information gain per unit cost by balancing the evaluation of cheaper auxiliary tasks with more expensive primary tasks, using the entropy search strategy. \n\n**Multi-Fidelity Bayesian Optimization via Deep Neural Networks**\n\nIt aims to maximize the mutual information between the predicted maximum of the objective function and the next point to be evaluated. The acquisition function selects the input location and fidelity level that provide the highest benefit-cost ratio. By employing fidelity-wise moment matching and Gauss-Hermite quadrature to approximate the posterior distributions, the acquisition function ensures that both fidelity selection and input sampling are computationally efficient and well-informed.\n\n**BOHB: Robust and Efficient Hyperparameter Optimization at Scale**\n\nIt selects new configurations by maximizing the expected improvement, using kernel density estimates of good and bad configurations. BOHB combines this with a multi-fidelity approach, which allows the acquisition function to operate across different budget levels, efficiently balancing exploration and exploitation while scaling to large optimization tasks.\n\n**Reinforced Few-Shot Acquisition Function Learning for Bayesian Optimization**:cite:`HsiehHL21`\n\nIn this method, the acquisition function is modeled with a deep Q-network (DQN), learning to balance exploration and exploitation as a reinforcement learning task. The DQN predicts sampling utility based on the posterior mean and variance, refined by a Bayesian variant that incorporates uncertainty to avoid overfitting.\n\n\n\n\n.. _alg-obj:\n\nList of Algorithmic Objects\n---------------------------\nThe optimization framework includes a variety of state-of-the-art algorithms, each designed with specific features to address different classes of optimization problems. The table below provides a summary of the key algorithmic objects available, categorized by their type and the source algorithm they derive from.\n\n.. csv-table::\n   :header: \"Algorithmic Objects\", \"Type\", \"Source Algorithm\"\n   :widths: 60, 10, 100\n   :file: algorithms.csv\n\n\nReferences\n----------\n\n.. bibliography:: TOS.bib\n   :style: plain"
  },
  {
    "path": "docs/source/usage/cli.rst",
    "content": ".. _command_line_usage:\n\nCommand Line\n===============================\n\nTransOPT provides a command-line interface (CLI) that allows users to define and run optimization tasks directly from the terminal. This is facilitated by the `run_cli.py` script, which supports a wide range of customizable parameters.\n\nRunning the Command-Line Interface\n----------------------------------\n\nTo run the `run_cli.py` script, navigate to the directory containing the script and use the following command:\n\n.. code-block:: bash\n\n   python transopt/agent/run_cli.py [OPTIONS]\n\nWhere `[OPTIONS]` are the command-line arguments you can specify to customize the behavior of TransOPT.\n\nI. Command-Line Arguments\n^^^^^^^^^^^^^^^^^^^^^^^^^\nHere is a list of the main command-line arguments supported by the script:\n\n**Task Configuration**\n\n- **`-n, --task_name`**: Name of the task (default: `\"Sphere\"`).\n- **`-v, --num_vars`**: Number of variables (default: `2`).\n- **`-o, --num_objs`**: Number of objectives (default: `1`).\n- **`-f, --fidelity`**: Fidelity level of the task (default: `\"\"`).\n- **`-w, --workloads`**: Workloads associated with the task (default: `\"0\"`).\n- **`-bt, --budget_type`**: Type of budget (e.g., `\"Num_FEs\"`) (default: `\"Num_FEs\"`).\n- **`-b, --budget`**: Budget for the task, typically the number of function evaluations (default: `100`).\n\n**Optimizer Configuration**\n\n- **`-sr, --space_refiner`**: Space refiner method (default: `\"None\"`).\n- **`-srp, --space_refiner_parameters`**: Parameters for the space refiner (default: `\"\"`).\n- **`-srd, --space_refiner_data_selector`**: Data selector for the space refiner (default: `\"None\"`).\n- **`-srdp, --space_refiner_data_selector_parameters`**: Parameters for the data selector (default: `\"\"`).\n- **`-sp, --sampler`**: Sampling method (default: `\"random\"`).\n- **`-spi, --sampler_init_num`**: Initial number of samples (default: `22`).\n- **`-spp, --sampler_parameters`**: Parameters for the sampler (default: `\"\"`).\n- **`-spd, --sampler_data_selector`**: Data selector for the sampler (default: `\"None\"`).\n- **`-spdp, --sampler_data_selector_parameters`**: Parameters for the sampler's data selector (default: `\"\"`).\n- **`-pt, --pre_train`**: Pretraining method (default: `\"None\"`).\n- **`-ptp, --pre_train_parameters`**: Parameters for pretraining (default: `\"\"`).\n- **`-ptd, --pre_train_data_selector`**: Data selector for pretraining (default: `\"None\"`).\n- **`-ptdp, --pre_train_data_selector_parameters`**: Parameters for the pretraining data selector (default: `\"\"`).\n- **`-m, --model`**: Model used for optimization (default: `\"GP\"`).\n- **`-mp, --model_parameters`**: Parameters for the model (default: `\"\"`).\n- **`-md, --model_data_selector`**: Data selector for the model (default: `\"None\"`).\n- **`-mdp, --model_data_selector_parameters`**: Parameters for the model's data selector (default: `\"\"`).\n- **`-acf, --acquisition_function`**: Acquisition function used (default: `\"EI\"`).\n- **`-acfp, --acquisition_function_parameters`**: Parameters for the acquisition function (default: `\"\"`).\n- **`-acfd, --acquisition_function_data_selector`**: Data selector for the acquisition function (default: `\"None\"`).\n- **`-acfdp, --acquisition_function_data_selector_parameters`**: Parameters for the acquisition function's data selector (default: `\"\"`).\n- **`-norm, --normalizer`**: Normalization method (default: `\"Standard\"`).\n- **`-normp, --normalizer_parameters`**: Parameters for 
the normalizer (default: `\"\"`).\n- **`-normd, --normalizer_data_selector`**: Data selector for the normalizer (default: `\"None\"`).\n- **`-normdp, --normalizer_data_selector_parameters`**: Parameters for the normalizer's data selector (default: `\"\"`).\n\n**General Configuration**\n\n- **`-s, --seeds`**: Random seed for reproducibility (default: `0`).\n\nII. Example Usage\n^^^^^^^^^^^^^^^^^\nBelow are some example commands demonstrating how to use the CLI to run different tasks with varying configurations.\n\n**Example 1: Running a basic task with default parameters**\n\n.. code-block:: bash\n\n   python transopt/agent/run_cli.py -n MyTask -v 3 -o 1 -b 200\n\n**Example 2: Running a task with a specific model and acquisition function**\n\n.. code-block:: bash\n\n   python transopt/agent/run_cli.py -n MyTask -v 3 -o 2 -m RF -acf UCB -b 300\n\n**Example 3: Using custom parameters for the space refiner and sampler**\n\n.. code-block:: bash\n\n   python transopt/agent/run_cli.py -n MyTask -sr \"Prune\"  -sp \"lhs\" -spi 30 -b 300\n\nIII. Additional Notes\n^^^^^^^^^^^^^^^^^^^^^\n- The **random seed** is particularly important for ensuring that the results are reproducible. Make sure to specify the `--seeds` option if you want to run experiments that can be exactly replicated.\n- TransOPT's CLI is highly flexible, allowing you to tailor the optimization process to your specific needs by adjusting the parameters and options provided.\n\nBy following the instructions above, you can effectively use the TransOPT CLI to run and manage your optimization tasks.\n"
  },
  {
    "path": "docs/source/usage/data_manage.rst",
    "content": "Data Management\n===============\n\nThe `datamanager` module is designed to manage data generated during optimization tasks. It provides a structured approach for storing, querying, and transferring data across different optimization scenarios. This module is built with flexibility in mind, enabling efficient management of various optimization task requirements, from persistent storage to similarity-based searches.\n\nThe primary roles of the `datamanager` include:\n\n- **Data Storage**: Utilizes a database-backed storage system to persist data generated throughout the optimization process, ensuring that configurations, results, and metadata are readily accessible.\n- **Checkpointing**: Allows for saving the state of optimization tasks at specific points, facilitating recovery and continuation from those points in case of interruptions.\n- **Flexible Search Mechanisms**: Incorporates both metadata-based searches for filtering and querying task information and Locality-Sensitive Hashing (LSH) for identifying similar data points within large datasets.\n\nThis combination of features makes the `datamanager` particularly valuable in scenarios where optimization processes are iterative and data-intensive, requiring a balance between precise control over individual task data and the ability to draw insights from historical records.\n\nData Storage and Management\n---------------------------\n\nThe `datamanager` module's core lies in its robust data storage and management capabilities, allowing it to efficiently handle the various data generated during optimization tasks. At its heart, it utilizes an SQLite database to persistently store task-related data, configurations, and metadata, ensuring that important information is retained across task iterations.\n\nData Storage Mechanism\n**********************\nThe `datamanager` uses SQLite as a backend for storing data, chosen for its lightweight and self-contained nature, making it suitable for scenarios where a full-fledged database system may be unnecessary. It supports various data formats for easy integration:\n\n- **Dictionary**: Individual task data can be stored as dictionaries, with keys representing field names and values representing corresponding data.\n- **List**: Supports lists of dictionaries or lists for batch insertion, allowing multiple records to be added in a single operation.\n- **DataFrame/NumPy Arrays**: For more complex data structures, `datamanager` can insert rows from pandas DataFrames or NumPy arrays, automatically converting these into a database-compatible format.\n\nThis flexibility allows the `datamanager` to adapt to different data needs and store both simple and complex optimization data structures seamlessly.\n\nMetadata Table Design\n*********************\nA key feature of the `datamanager` is its use of metadata tables to store descriptive information about each optimization task, enabling more efficient querying and organization of data. 
The `_metadata` table is specifically designed to record detailed information about each stored optimization task, such as:\n\n- **table_name**: The name of the table where task data is stored, serving as the primary key.\n- **problem_name**: The name of the optimization problem, providing context for the stored task.\n- **dimensions**: The number of dimensions involved in the optimization problem.\n- **objectives**: The number of objectives being optimized.\n- **fidelities**: A textual representation of various fidelities or resolution levels within the task.\n- **budget_type** and **budget**: Information about the type and limits of the budget used in the optimization.\n- **space_refiner**, **sampler**, **model**, **pretrain**, **acf**, **normalizer**: Parameters related to the methodologies and models used in the optimization process.\n- **dataset_selectors**: A JSON-encoded structure representing the criteria for dataset selection.\n\nThe `_metadata` table serves as a centralized index of all stored tasks, allowing users to quickly retrieve and filter tasks based on their descriptive attributes.\n\nData Storage Flexibility\n************************\nThe design of the `datamanager` ensures that data storage remains flexible and adaptable. By supporting multiple input formats and allowing for dynamic database interactions, the module can handle diverse data requirements across different optimization tasks. This adaptability is crucial for optimization scenarios where the nature of the data may change based on the problem domain or the phase of the optimization process.\n\nThrough its use of a lightweight SQLite database, combined with a rich set of metadata, the `datamanager` provides a powerful yet simple way to manage and organize the data generated during optimization, laying the groundwork for more complex functionalities such as checkpointing and similarity-based searches.\n\n\nSimilarity Search and Data Reuse\n--------------------------------\n\nThe `datamanager` module includes advanced functionality for identifying similar data points and facilitating data reuse between optimization tasks with similar characteristics. This is particularly valuable in scenarios where insights from past optimization runs can be leveraged to accelerate new tasks, reducing the time and computational effort needed to reach optimal solutions.\n\nSimilarity Search Mechanisms\n****************************\nThe `datamanager` uses a combination of Locality-Sensitive Hashing (LSH) and MinHash techniques to perform similarity searches across stored optimization data. These techniques allow for efficient identification of data points that are similar but not identical, supporting exploratory optimization where near-optimal solutions can inform new tasks.\n\n- **Locality-Sensitive Hashing (LSH)**: LSH is employed to map similar data points into the same hash buckets with high probability. This approach reduces the dimensionality of the data and enables fast similarity searches by grouping data that are likely to be similar into the same buckets.\n- **MinHash Signatures**: MinHash is used to generate compact signatures for high-dimensional data (such as configuration states or text data). 
These signatures make it possible to estimate the similarity between data points by comparing their hashed values, which approximates the Jaccard similarity between sets.\n\nIntegration with the Manager for Task Similarity\n************************************************\nWithin the `manager`, the LSH mechanism is used to identify optimization tasks that share similar characteristics. This process involves creating a concise representation, or \"vector,\" of each task based on key properties like the number of variables, number of objectives, and descriptive information about the task.\n\n- **Vector Representation of Tasks**: To enable comparison, each optimization task is represented by a vector that summarizes important aspects of the task, such as:\n  - The complexity of the problem (e.g., number of variables and objectives).\n  - Descriptive details about the variables, which can capture the nature of the problem.\n  - The task's name or type, helping categorize similar tasks.\n\n  This vector encapsulates the essential details of each optimization task, providing a structured way to describe the nature of the problem. By converting tasks into such standardized vectors, it becomes possible to use LSH to efficiently identify similar tasks.\n\n- **Similarity Search Process**: The LSH mechanism uses these vectors to map tasks into clusters or hash buckets, where tasks with similar vectors are more likely to be grouped together. When a new task or query vector is introduced, the `manager` can quickly identify past tasks that fall into the same bucket, indicating similarity based on the problem structure and configuration.\n\n  This allows the `datamanager` to efficiently locate tasks with similar characteristics, enabling reuse of past results or configurations that may be relevant to the new task.\n\nApplications of Similarity-Based Data Reuse\n*******************************************\nThe similarity search and data reuse capabilities of the `datamanager` provide significant advantages in various optimization scenarios:\n\n- **Warm Start for Optimization**: By starting a new task with configurations similar to those that were effective in past tasks, users can perform \"warm starts\" that speed up convergence.\n- **Adaptive Optimization**: For tasks that require adjusting optimization parameters over time, the ability to find and utilize past similar configurations ensures that the adjustments are more efficient.\n- **Transfer Learning in Optimization**: When optimization tasks vary slightly across iterations (e.g., changing problem parameters or objectives), the `datamanager` helps carry over useful information from previous runs, acting as a form of transfer learning in optimization.\n\nThe integration of vector-based representations with LSH makes the `datamanager` a powerful tool for scenarios where similarity and reuse are critical. It enables users to not only store and manage data but also to leverage historical results for more efficient optimization processes.\n\n\nFlexible Data Querying\n----------------------\n\nThe `datamanager` module is equipped with versatile data querying capabilities, allowing users to perform both precise and approximate searches based on the needs of their optimization tasks. 
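\n\nAs a preview of the dynamic query generation described below, the following standalone sketch builds a parameterized `WHERE` clause from a dictionary of search criteria (the real logic lives on the `datamanager` itself; this helper only mirrors the idea, and assumes the keys are known `_metadata` column names):\n\n.. code-block:: python\n\n   import sqlite3\n\n   def search_tables_by_metadata(conn, search_params):\n       # Each key in search_params becomes an equality condition; values are\n       # passed as bound parameters rather than interpolated into the SQL.\n       clause = ' AND '.join(key + ' = ?' for key in search_params)\n       sql = 'SELECT table_name FROM _metadata'\n       if clause:\n           sql += ' WHERE ' + clause\n       rows = conn.execute(sql, tuple(search_params.values())).fetchall()\n       return [row[0] for row in rows]\n\n   conn = sqlite3.connect('transopt_data.db')\n   print(search_tables_by_metadata(\n       conn, {'problem_name': 'example_problem', 'budget_type': 'fixed'}))\n\n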
This flexibility ensures that data can be accessed efficiently, whether the goal is to find an exact match or to identify records that satisfy broader criteria.\n\nMetadata-Based Search\n*********************\nA key feature of the `datamanager` is its ability to perform metadata-based searches. This type of search leverages the detailed metadata stored about each optimization task, enabling users to filter and retrieve data based on descriptive attributes such as task name, problem type, and configuration settings.\n\n- **Dynamic Query Generation**: The `datamanager` constructs SQL `WHERE` clauses dynamically based on user-provided search criteria, allowing for flexible and complex queries. Users can specify one or multiple metadata fields as search parameters, and the module generates the appropriate SQL query to retrieve matching records.\n- **Example**: Using the `search_tables_by_metadata` method, users can search for optimization tasks based on criteria such as `problem_name`, `budget_type`, or other attributes stored in the `_metadata` table:\n  \n  .. code-block:: python\n  \n      search_params = {\n          \"problem_name\": \"example_problem\",\n          \"budget_type\": \"fixed\"\n      }\n      matching_tables = datamanager.search_tables_by_metadata(search_params)\n\n  This example retrieves the names of all tables associated with optimization tasks that match the given `problem_name` and `budget_type`, making it easier to filter data relevant to specific problem settings.\n\nPrecise Search and Filtering\n****************************\nWhile metadata-based search provides flexibility, the `datamanager` also supports precise search and filtering operations. These are particularly useful when specific task data needs to be retrieved without approximation:\n\n- **Primary Key and Index-Based Search**: For tasks or configurations that are uniquely identified by fields like `table_name` or `task_id`, the `datamanager` uses indexed fields for fast lookups, ensuring that searches for individual records are highly efficient.\n- **SQL Query Support**: Users can execute custom SQL queries to interact directly with the database, giving them complete control over the data retrieval process. This is useful when complex joins, aggregations, or advanced filtering conditions are required.\n- **Example**: A precise search can be performed to retrieve data entries associated with a specific task ID:\n\n  .. code-block:: python\n  \n      query = \"SELECT * FROM _metadata WHERE table_name = 'task_example'\"\n      task_data = datamanager.execute(query, fetchall=True)\n\n  This example demonstrates how to execute a custom SQL query to retrieve all metadata associated with a specific task table.\n\nCombining Flexible and Precise Queries\n**************************************\nThe true strength of the `datamanager` lies in its ability to combine flexible metadata-based search with precise data retrieval. This enables users to start with broad criteria to identify relevant tasks and then drill down into specific records for deeper analysis.\n\n- **Hybrid Search Strategies**: Users can first use metadata-based queries to identify relevant tasks and then apply precise searches to extract detailed data from those tasks.\n- **Example Workflow**:\n  \n  1. Use `search_tables_by_metadata` to find all tasks that match certain criteria (e.g., problem type).\n  2. Iterate over the results and use SQL queries to retrieve detailed data from each identified task.\n\n  .. 
code-block:: python\n  \n      search_params = {\"problem_name\": \"example_problem\"}\n      tables = datamanager.search_tables_by_metadata(search_params)\n      for table in tables:\n          data = datamanager.execute(f\"SELECT * FROM {table} WHERE objectives = 2\", fetchall=True)\n\n  This workflow allows users to identify relevant optimization tasks and then extract specific entries from each, providing a balance between breadth and depth in data retrieval.\n\nThe flexibility of the `datamanager`'s querying capabilities ensures that users can adapt their data retrieval strategies to their specific needs, whether they require a broad overview of multiple tasks or a detailed analysis of a single task. This makes it an indispensable tool for managing data in complex optimization environments.\n\n\nIntegration with Optimization Tasks\n-----------------------------------\n\nThe `datamanager` module is designed to integrate smoothly with optimization task workflows, providing an interface for storing, retrieving, and managing data throughout the lifecycle of these tasks. It acts as the central repository for task configurations, results, and intermediate data, enabling easy access and modification during different stages of optimization.\n\nTask Data Flow Management\n*************************\nThe `datamanager` plays a crucial role in managing the flow of data during the execution of optimization tasks. It ensures that data is stored in a structured manner, facilitating efficient access and modification throughout the optimization process. Key aspects include:\n\n- **Storing Initial Configurations**: When an optimization task starts, the initial configurations and parameters can be stored in the `datamanager`, creating a record of the starting point.\n- **Recording Intermediate Results**: As the optimization progresses, intermediate results and states can be stored, allowing users to analyze the trajectory of the process.\n- **Saving Final Outcomes**: Once the optimization task concludes, the final configurations and results are saved, creating a comprehensive record of the optimization process.\n\nThis structured approach to data storage ensures that all phases of the optimization process are properly recorded, making it easy to track changes and analyze outcomes.\n\nInteraction with Optimization Algorithms\n****************************************\nThe `datamanager` acts as a data backend that stores and retrieves task-related data as needed, allowing optimization algorithms to make data-driven decisions:\n\n- **Configuration Retrieval**: Optimization algorithms can query the `datamanager` to retrieve specific configurations or parameters that were effective in past tasks.\n- **Logging Adjustments**: As algorithms adjust parameters or explore new solutions, they can store updated configurations, ensuring that each adjustment is logged.\n\nThese interactions ensure that the optimization process is well-documented and that historical data can be leveraged effectively, enhancing the decision-making process of optimization algorithms.\n\n\nDesign Considerations\n---------------------\n\nThe design of the `datamanager` module is driven by the need to balance flexibility, performance, and scalability in handling optimization task data. 
This section outlines the key design considerations that guided the development of the module, ensuring that it meets the diverse needs of users working with complex optimization problems.\n\nModularity and Extensibility\n****************************\nThe `datamanager` is designed with a modular architecture, where each major function, such as data storage, similarity search, and checkpointing, is encapsulated in a dedicated component. This modular design offers several advantages:\n\n- **Separation of Concerns**: Each component handles a specific aspect of data management, allowing users to focus on individual functionalities without being overwhelmed by the entire system.\n- **Ease of Maintenance**: With separate modules for each function, updates and bug fixes can be applied to specific areas without affecting the rest of the system, making maintenance simpler.\n- **Extensibility**: New features or modifications can be added without disrupting existing functionality. For example, support for a different database backend or a new similarity search technique can be added by extending the relevant module, without overhauling the entire system.\n\nPerformance Optimization\n************************\nGiven the data-intensive nature of optimization tasks, performance was a critical consideration in the design of the `datamanager`. Several strategies ensure that the module can handle large datasets and complex queries efficiently:\n\n- **Indexed Storage**: Indexes on key fields in the SQLite database, such as `table_name` and `problem_name`, keep searches and data retrievals fast even as the database grows.\n- **Batch Data Insertion**: When storing large amounts of data, the `datamanager` supports batch insertion, reducing the overhead of frequent database writes. This minimizes transaction time and speeds up the overall data storage process.\n- **Efficient Similarity Computation**: By using Locality-Sensitive Hashing (LSH) and MinHash, the module avoids the computational cost of pairwise comparisons in high-dimensional space, making similarity-based searches scalable even for large datasets.\n\nThese performance optimizations keep the `datamanager` responsive and efficient for users dealing with large-scale optimization tasks.
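\n\nThe effect of batching is easy to demonstrate with a self-contained `sqlite3` sketch; the table and column names below are illustrative, not the `datamanager`'s actual schema:\n\n.. code-block:: python\n\n    import sqlite3\n\n    # Illustrative only: write 10,000 rows in one atomic transaction via\n    # executemany() instead of issuing 10,000 individual commits.\n    conn = sqlite3.connect(\":memory:\")\n    conn.execute(\"CREATE TABLE task_example (iteration INTEGER, function_value REAL)\")\n    rows = [(i, float(i) ** 2) for i in range(10_000)]\n    with conn:\n        conn.executemany(\n            \"INSERT INTO task_example (iteration, function_value) VALUES (?, ?)\",\n            rows,\n        )\n\nScalability and Data Volume Management\n**************************************\nAs the `datamanager` is intended for use with potentially large datasets generated by optimization tasks, scalability was a key focus during its design:\n\n- **Scalable Database Design**: The choice of SQLite as the initial database backend provides a lightweight and self-contained solution that is sufficient for many use cases. However, the design is abstract enough to allow for future migration to more powerful database systems like PostgreSQL if the need for greater scalability arises.\n- **Incremental Data Storage**: The ability to store data incrementally during optimization tasks ensures that the database grows in a controlled manner, preventing sudden spikes in storage requirements. This is especially important for long-running tasks that generate large volumes of data over time.\n- **Support for Data Archiving**: To prevent the database from becoming too large and unwieldy, the `datamanager` supports archiving of old or completed task data. 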
This ensures that the active dataset remains manageable, while older records can be preserved externally if needed for historical analysis.\n\nThese design choices make the `datamanager` suitable for both small-scale experimentation and larger, more data-intensive optimization environments, adapting to the needs of different users.\n\nFlexibility in Data Formats and Querying\n****************************************\nFlexibility is a hallmark of the `datamanager`'s design, ensuring that it can adapt to various data types and query requirements across different optimization problems:\n\n- **Support for Multiple Data Formats**: The module supports the insertion and retrieval of data in various formats, including dictionaries, lists, pandas DataFrames, and NumPy arrays. This flexibility allows users to work with their preferred data structures without needing to conform to a rigid format.\n- **Customizable Search and Query Mechanisms**: Users have the freedom to perform custom SQL queries directly on the database, allowing for advanced data manipulation and retrieval. This is especially useful when standard search methods do not meet specific analysis requirements.\n- **Adaptable Metadata Design**: The `_metadata` table is designed to be extensible, allowing users to add new fields that are relevant to their particular optimization tasks. This ensures that the metadata stored for each task can evolve alongside the changing needs of the optimization process.\n\nThese aspects of flexibility make the `datamanager` an adaptable tool, suitable for a wide range of use cases and optimization frameworks.\n\nReliability and Data Integrity\n******************************\nEnsuring data integrity and reliability is critical when managing optimization task data. The `datamanager` includes several mechanisms to safeguard against data loss and ensure consistency:\n\n- **Atomic Transactions**: The use of atomic transactions in database operations ensures that data is written consistently, reducing the risk of data corruption during inserts, updates, or deletions.\n- **Checkpointing for Data Safety**: The checkpointing feature serves as a safeguard, allowing users to restore a task to a previous state if issues occur, ensuring that progress is not permanently lost.\n- **Data Validation**: Basic validation checks are performed before data is inserted into the database, ensuring that required fields are present and data types are correct. This prevents invalid data from entering the system and causing errors during later stages of optimization.\n\nWith these mechanisms in place, the `datamanager` is designed to maintain the reliability and integrity of the data it manages, ensuring that users can trust it as a robust data management solution for their optimization tasks.\n\nThe design considerations of the `datamanager` reflect a balance between flexibility, performance, and reliability, making it a well-rounded choice for managing the complex and evolving data needs of optimization tasks. Its modular structure, scalability, and focus on data integrity ensure that it can adapt to different challenges and provide consistent value in optimization scenarios.\n"
  },
  {
    "path": "docs/source/usage/problems.rst",
    "content": "Benchmark Problems\n==================\nThis\n\n.. admonition:: Overview\n   :class: info\n\n   - :ref:`Register <registering-new-problem>`: How to register a new optimization problem to :ref:`TransOPT <home>`\n   - :ref:`Synthetic Problem <synthetic-problems>`: The list of the synthetic problems available in :ref:`TransOPT <home>`\n   - :ref:`Hyperparameter Optimization Problem <hpo-problems>`: The list of the HPO problems available in :ref:`TransOPT <home>`\n   - :ref:`Configurable Software Optimization Problem <cso-problems>`: The list of the configurable software optimization problems available in :ref:`TransOPT <home>`\n   - :ref:`RNA Inverse Design Problem <rna-problems>`: The list of the RNA Inverse design problems available in :ref:`TransOPT <home>`\n   - :ref:`Protein Inverse Folding Problem <pif-problems>`: The list of the protein inverse folding problems available in :ref:`TransOPT <home>`\n   - :ref:`Parallelization <parallelization>`: How to parallelize function evaluations\n\n\n.. _registering-new-problem:\n\n\nRegistering a New Benchmark Problem\n-----------------------------------\n\nTo register a new benchmark problem in the TransOPT framework, follow the steps below.\n\nI. Import the Problem Registry\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFirst, you need to import the `problem_registry` from the `transopt.agent.registry` module:\n\n.. code-block:: python\n\n    from transopt.agent.registry import problem_registry\n\nII. Define a New Problem Class\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nNext, define a new problem class. This class should be decorated with the `@problem_registry.register(\"ProblemName\")` decorator, where `\"ProblemName\"` is the unique identifier for the problem. The new problem class must inherit from one of the following base classes:\n\n- `NonTabularProblem`\n- `TabularProblem`\n\nFor example, to create a new problem named \"new_problem\", you would define the class as follows:\n\n.. code-block:: python\n\n    @problem_registry.register(\"new_problem\")\n    class new_problem(NonTabularProblem):\n        pass  # Further implementation required\n\nIII. Implement Required Methods\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAfter defining the class, you need to implement the following three abstract methods:\n\n1. **get_configuration_space**: \n   This method is responsible for defining the configuration space of the new problem.\n\n   .. code-block:: python\n\n       def get_configuration_space(self):\n           # Define and return the configuration space\n           pass\n\n2. **get_fidelity_space**: \n   This method should define the fidelity space for the problem, if applicable.\n\n   .. code-block:: python\n\n       def get_fidelity_space(self):\n           # Define and return the fidelity space\n           pass\n\n3. **objective_function**: \n   This method evaluates the problem's objective function based on the provided configuration and other parameters.\n\n   .. code-block:: python\n\n       def objective_function(self, configuration, fidelity=None, seed=None, **kwargs) -> Dict:\n           # Evaluate the configuration and return the results as a dictionary\n           pass\n\nHere’s an example outline of the `sphere` class:\n\n.. 
code-block:: python\n\n    import numpy as np\n    from typing import Dict\n\n    # NonTabularProblem, Continuous, SearchSpace and FidelitySpace are\n    # TransOPT classes; import them from the corresponding modules.\n\n    @problem_registry.register(\"sphere\")\n    class sphere(NonTabularProblem):\n\n        def get_configuration_space(self):\n            # One continuous variable per input dimension\n            variables = [Continuous(f'x{i}', (-5.12, 5.12)) for i in range(self.input_dim)]\n            return SearchSpace(variables)\n\n        def get_fidelity_space(self) -> FidelitySpace:\n            # The sphere function defines no fidelity dimensions\n            return FidelitySpace([])\n\n        def objective_function(self, configuration, fidelity=None, seed=None, **kwargs) -> Dict:\n            # Evaluate f(x) = sum(x_i^2) on the given configuration\n            x = np.array([configuration[k] for k in configuration.keys()])\n            return {'function_value': float(np.sum(x ** 2))}\n\nBy following these steps, you can successfully register a new benchmark problem in the TransOPT framework.
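\n\nOnce registered, the problem can be smoke-tested directly. The constructor arguments used below (`task_name`, `budget`, `input_dim`) are hypothetical placeholders; check the `NonTabularProblem` base class for the actual signature:\n\n.. code-block:: python\n\n    # Hypothetical smoke test; the constructor arguments are placeholders,\n    # not the documented base-class signature.\n    problem = sphere(task_name='sphere_demo', budget=100, input_dim=2)\n    result = problem.objective_function({'x0': 1.0, 'x1': -1.0})\n    print(result)  # {'function_value': 2.0}\n\n.. _synthetic-problems:\n\nSynthetic Problem\n------------------\n\nThe synthetic problems in this section are widely used in the optimization literature for benchmarking optimization algorithms. These problems exhibit diverse characteristics and levels of complexity, making them ideal for testing the robustness and efficiency of different optimization strategies. Below is an overview of the synthetic problems included in this benchmark suite:\n\n- **Sphere:** A simple convex problem that is often used as a baseline. The global minimum is located at the origin, and the objective function value increases quadratically with distance from the origin.\n\n- **Rastrigin:** A non-convex problem characterized by a large number of local minima, making it challenging for optimization algorithms to find the global minimum.\n\n- **Schwefel:** Known for its complex landscape with many local minima, the Schwefel function requires optimization algorithms to balance exploration and exploitation effectively.\n\n- **Ackley:** A multi-modal function with a nearly flat outer region and a large hole at the center, making it difficult for algorithms to escape local minima and converge to the global minimum.\n\n- **Levy:** A multi-modal problem with a complex landscape that tests an algorithm's ability to handle irregularities and identify global optima.\n\n- **Griewank:** A function with many widespread local minima, making it challenging to converge to the global optimum. It is often used to assess the ability of algorithms to avoid getting trapped in local minima.\n\n- **Rosenbrock:** A non-convex problem with a narrow, curved valley that contains the global minimum. 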
This function is commonly used to test the convergence properties of optimization algorithms.\n\n- **Dropwave:** A challenging multi-modal function with steep drops, requiring careful search strategies to avoid local minima.\n\n- **Langermann:** This problem has many local minima and a highly irregular structure, testing an algorithm's ability to explore complex search spaces.\n\n- **Rotated Hyper-Ellipsoid:** A rotated version of the ellipsoid function, which tests an algorithm's capability to optimize problems with rotated and ill-conditioned landscapes.\n\n- **Sum of Different Powers:** A problem where each term in the sum contributes differently to the overall objective, requiring optimization algorithms to handle varying sensitivities across dimensions.\n\n- **Styblinski-Tang:** A function with multiple global minima, commonly used to test an algorithm's ability to avoid suboptimal solutions.\n\n- **Powell:** A problem designed to challenge optimization algorithms with a mixture of convex and non-convex characteristics across different dimensions.\n\n- **Dixon-Price:** This function has a smooth, narrow valley leading to the global minimum, testing an algorithm’s ability to navigate such features.\n\n- **Ellipsoid:** A test problem that features high conditioning and elliptical level sets, requiring algorithms to efficiently search in skewed spaces.\n\n- **Discus:** A variant of the sphere function with a large difference in scale between the first variable and the rest, making it a test of handling unbalanced scales.\n\n- **BentCigar:** A highly anisotropic function where one direction has a much larger scale than the others, challenging algorithms to adjust their search strategies accordingly.\n\n- **SharpRidge:** This function has a sharp ridge along one dimension, testing an algorithm's ability to optimize in narrow, high-gradient regions.\n\n- **Katsuura:** A multi-fractal function that combines periodicity and complexity, testing the capability of algorithms to explore intricate landscapes.\n\n- **Weierstrass:** A problem with a fractal structure, characterized by a large number of local minima and requiring algorithms to handle varying scales of roughness.\n\n- **Different Powers:** A problem where each term contributes differently to the objective, challenging algorithms to manage varying sensitivities and scales.\n\n- **Trid:** A function that has a curved and ridge-like structure, often used to assess the convergence properties of optimization algorithms.\n\n- **LinearSlope:** A simple linear function with a varying slope across dimensions, used to test the basic exploration capabilities of optimization methods.\n\n- **Elliptic:** Similar to the Ellipsoid function but with exponentially increasing scales, testing an algorithm’s ability to search efficiently in poorly conditioned spaces.\n\n- **PERM:** A complex combinatorial problem that combines different power terms, testing an algorithm’s ability to handle permutation-based search spaces.\n\n- **Power Sum:** A problem where each dimension contributes a power sum to the objective, requiring algorithms to handle large variations in sensitivity across variables.\n\n- **Zakharov:** A problem with a complex, non-linear interaction between variables, used to test an algorithm’s ability to navigate multi-variable coupling.\n\n- **Six-Hump Camel:** A low-dimensional, multi-modal problem with several local minima, requiring precise search strategies to find the global optimum.\n\n- **Michalewicz:** A problem known for its 
challenging steepness and periodicity, making it difficult for algorithms to locate the global minimum.\n\n- **Moving Peak:** A dynamic optimization problem where the objective function changes over time, used to assess an algorithm’s adaptability to changing landscapes.\n\nThese problems collectively provide a comprehensive suite for evaluating optimization algorithms across a broad range of difficulties, including convexity, multi-modality, separability, and conditioning.\n\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n|      Problem name       |                                                                       Mathematical formulation                                                                        |              Range                       |                               |                             |\n+=========================+=======================================================================================================================================================================+==========================================+===============================+=============================+\n| Sphere                  | :math:`f(\\mathbf{x}) = \\sum_{i=1}^d x_i^2`                                                                                                                            | :math:`x_i \\in [-5.12, 5.12]`            |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Rastrigin               | :math:`f(\\mathbf{x}) = 10 d + \\sum_{i=1}^d \\left[ x_i^2 - 10 \\cos(2 \\pi x_i) \\right]`                                                                                 | :math:`x_i \\in [-32.768, 32.768]`        |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Schwefel                | :math:`f(\\mathbf{x}) = 418.9829 d - \\sum_{i=1}^d x_i \\sin\\left(\\sqrt{\\left|x_i\\right|}\\right)`                                                                        | :math:`x_i \\in [-500, 500]`              |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Ackley                  | :math:`f(\\mathbf{x}) = -a \\exp \\left(-b \\sqrt{\\frac{1}{d} \\sum_{i=1}^d x_i^2}\\right)`                                                                                 | :math:`x_i \\in [-32.768, 32.768]`        |                               |                             |\n|                         
| :math:`-\\exp \\left(\\frac{1}{d} \\sum_{i=1}^d \\cos \\left(c x_i\\right)\\right) + a + \\exp(1)`                                                                             |                                          |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Levy                    | :math:`f(\\mathbf{x}) = \\sin^2\\left(\\pi w_1\\right) + \\sum_{i=1}^{d-1}\\left(w_i - 1\\right)^2`                                                                           | :math:`x_i \\in [-10, 10]`                |                               |                             |\n|                         | :math:`\\left[1 + 10 \\sin^2\\left(\\pi w_i + 1\\right)\\right] + \\left(w_d - 1\\right)^2`                                                                                   |                                          |                               |                             |\n|                         | :math:`\\left[1 + \\sin^2\\left(2 \\pi w_d\\right)\\right], w_i = 1 + \\frac{x_i - 1}{4}`                                                                                    |                                          |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Griewank                | :math:`f(\\mathbf{x}) = \\sum_{i=1}^d \\frac{x_i^2}{4000} - \\prod_{i=1}^d \\cos\\left(\\frac{x_i}{\\sqrt{i}}\\right) + 1`                                                     | :math:`x_i \\in [-600, 600]`              |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Rosenbrock              | :math:`f(\\mathbf{x}) = \\sum_{i=1}^{d-1}\\left[100\\left(x_{i+1} - x_i^2\\right)^2 + \\left(x_i - 1\\right)^2\\right]`                                                       | :math:`x_i \\in [-5, 10]`                 |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Dropwave                | :math:`f(\\mathbf{x}) = -\\frac{1 + \\cos\\left(12 \\sqrt{x_1^2 + x_2^2}\\right)}{0.5\\left(x_1^2 + x_2^2\\right) + 2}`                                                       | :math:`x_i \\in [-5.12, 5.12]`            |                               |                             
|\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Langermann              | :math:`f(\\mathbf{x}) = \\sum_{i=1}^m c_i \\exp\\left(-\\frac{1}{\\pi} \\sum_{j=1}^d \\left(x_j - A_{ij}\\right)^2\\right)`                                                     | :math:`x_i \\in [0, 10]`                  |                               |                             |\n|                         | :math:`\\cos\\left(\\pi \\sum_{j=1}^d\\left(x_j - A_{ij}\\right)^2\\right)`                                                                                                  |                                          |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Rotated Hyper-Ellipsoid | :math:`f(\\mathbf{x}) = \\sum_{i=1}^d \\sum_{j=1}^i x_j^2`                                                                                                               | :math:`x_i \\in [-65.536, 65.536]`        |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Sum of Different Powers | :math:`f(\\mathbf{x}) = \\sum_{i=1}^d x_i^{i+1}`                                                                                                                        | :math:`x_i \\in [-1, 1]`                  |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Styblinski-Tang         | :math:`f(\\mathbf{x}) = \\frac{1}{2} \\sum_{i=1}^d\\left(x_i^4 - 16 x_i^2 + 5 x_i\\right)`                                                                                 | :math:`x_i \\in [-5, 5]`                  |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Powell                  | :math:`f(\\mathbf{x}) = \\sum_{i=1}^{d/4}\\left(x_{4i-3} + 10 x_{4i-2}\\right)^2`                                                                                         | :math:`x_i \\in [-4, 5]`                  |                               |                             |\n|                         | :math:`+ 5\\left(x_{4i-1} - x_{4i}\\right)^2`                                                                                                                         
  |                                          |                               |                             |\n|                         | :math:`+ \\left(x_{4i-2} - 2 x_{4i-1}\\right)^4`                                                                                                                        |                                          |                               |                             |\n|                         | :math:`+ 10\\left(x_{4i-3} - x_{4i}\\right)^4`                                                                                                                          |                                          |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Dixon-Price             | :math:`f(\\mathbf{x}) = \\left(x_1 - 1\\right)^2 + \\sum_{i=2}^d i\\left(2 x_i^2 - x_{i-1}\\right)^2`                                                                       | :math:`x_i \\in [-10, 10]`                |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Ellipsoid               | :math:`f_2(\\mathbf{x}) = \\sum_{i=1}^D 10^{6 \\frac{i-1}{D-1}} z_i^2 + f_{\\mathrm{opt}}`                                                                                | :math:`x_i \\in [-5, 5]`                  |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Discus                  | :math:`f(\\mathbf{x}) = 10^6 x_1^2 + \\sum_{i=2}^D x_i^2`                                                                                                               | :math:`x_i \\in [-5, 5]`                  |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| BentCigar               | :math:`f(\\mathbf{x}) = x_1^2 + 10^6 \\sum_{i=2}^n x_i^2`                                                                                                               | :math:`x_i \\in [-5, 5]`                  |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| SharpRidge              | :math:`f(\\mathbf{x}) = x_1^2 + 100 \\sqrt{\\sum_{i=2}^D x_i^2}`         
                                                                                                | :math:`x_i \\in [-5, 5]`                  |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Katsuura                | :math:`f(\\mathbf{x}) = \\frac{10}{D^2} \\prod_{i=1}^D \\left(1 + i \\sum_{j=1}^{32} \\frac{2^j x_i - \\left[2^j x_i\\right]}{2^j}\\right)^{10 / D^{1.2}}`                     | :math:`x_i \\in [-5, 5]`                  |                               |                             |\n|                         | :math:`- \\frac{10}{D^2} + f_{\\mathrm{pen}}(\\mathbf{x})`                                                                                                               |                                          |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Weierstrass             | :math:`f_{16}(\\mathbf{x}) = 10 \\left(\\frac{1}{D} \\sum_{i=1}^D \\sum_{k=0}^{11} \\frac{1}{2^k} \\cos \\left(2 \\pi 3^k\\left(z_i + \\frac{1}{2}\\right)\\right) - f_0\\right)^3` | :math:`x_i \\in [-5, 5]`                  |                               |                             |\n|                         | :math:`+ \\frac{10}{D} f_{\\mathrm{pen}}(\\mathbf{x})`                                                                                                                   |                                          |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| DifferentPowers         | :math:`f(\\mathbf{x}) = \\sqrt{\\sum_{i=1}^D x_i^{2 + 4 \\frac{i-1}{D-1}}}`                                                                                               | :math:`x_i \\in [-5, 5]`                  |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Trid                    | :math:`f(\\mathbf{x}) = \\sum_{i=1}^d \\left(x_i - 1\\right)^2 - \\sum_{i=2}^d x_i x_{i-1}`                                                                                | :math:`x_i \\in [-d^2, d^2]`              |                               |                             
|\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| LinearSlope             | :math:`f(\\mathbf{x}) = \\sum_{i=1}^D 5 s_i - s_i x_i`                                                                                                                  | :math:`x_i \\in [-5, 5]`                  |                               |                             |\n|                         | :math:`s_i = \\operatorname{sign}\\left(x_i^{\\mathrm{opt}}\\right) 10^{\\frac{i-1}{D-1}},`                                                                                |                                          |                               |                             |\n|                         | :math:`\\text{for } i=1, \\ldots, D`                                                                                                                                    |                                          |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Elliptic                | :math:`f(\\mathbf{x}) = \\sum_{i=1}^D \\left(10^6\\right)^{\\frac{i-1}{D-1}} x_i^2`                                                                                        | :math:`x_i \\in [-5, 5]`                  |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| PERM                    | :math:`f(\\mathbf{x}) = \\sum_{i=1}^d \\left(\\sum_{j=1}^d \\left(j + \\beta\\right)\\left(x_j^i - \\frac{1}{j^i}\\right)\\right)^2`                                             | :math:`x_i \\in [-d, d]`                  |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Power Sum               | :math:`f(\\mathbf{x}) = \\sum_{i=1}^d \\left[\\left(\\sum_{j=1}^d x_j^i\\right) - b_i\\right]^2`                                                                             | :math:`x_i \\in [0, d]`                   |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Zakharov                | :math:`f(\\mathbf{x}) = \\sum_{i=1}^d x_i^2 + \\left(\\sum_{i=1}^d 0.5 i x_i\\right)^2`                                                                               
     | :math:`x_i \\in [-5, 10]`                 |                               |                             |\n|                         | :math:`+ \\left(\\sum_{i=1}^d 0.5 i x_i\\right)^4`                                                                                                                       |                                          |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Six-Hump Camel          | :math:`f(\\mathbf{x}) = \\left(4 - 2.1 x_1^2 + \\frac{x_1^4}{3}\\right) x_1^2 + x_1 x_2`                                                                                  | :math:`x_1 \\in [-3, 3], x_2 \\in [-2, 2]` |                               |                             |\n|                         | :math:`+ \\left(-4 + 4 x_2^2\\right) x_2^2`                                                                                                                             |                                          |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Michalewicz             | :math:`f(\\mathbf{x}) = -\\sum_{i=1}^d \\sin \\left(x_i\\right) \\sin ^{2 m}\\left(\\frac{i x_i^2}{\\pi}\\right)`                                                               | :math:`x_i \\in [0, \\pi]`                 |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| Moving Peak             | :math:`f(\\mathbf{x}) = \\sum_{i=1}^D \\left(10^6\\right)^{\\frac{i-1}{D-1}} x_i^2`                                                                                        | :math:`x_i \\in [0, 100]`                 |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n| PERM 2                  | :math:`f(\\mathbf{x}) = \\sum_{i=1}^d\\left(\\sum_{j=1}^d\\left(j^i+\\beta\\right)\\left(\\left(\\frac{x_j}{j}\\right)^i-1\\right)\\right)^2`                                      | :math:`x_i \\in [-d, d]`                  |                               |                             |\n+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------+-------------------------------+-----------------------------+\n\n\n.. 
_hpo-problems:\n\nHyperparameter Optimization Problem\n------------------------------------\n\nThis section provides an overview of the hyperparameter optimization problems, including the hyperparameters used for various machine learning models and the machine learning tasks used to generate problem instances.\n\nHyperparameters for Support Vector Machine (SVM)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nSupport Vector Machines (SVM) are widely used for classification and regression tasks. They are particularly effective in high-dimensional spaces and situations where the number of dimensions exceeds the number of samples. The hyperparameters for SVM control the regularization and the kernel function, which are crucial for model performance.\n\n+--------------------+-------------------+------------+\n| **Hyperparameter** |     **Range**     |  **Type**  |\n+====================+===================+============+\n| C                  | :math:`[-10, 10]` | Continuous |\n+--------------------+-------------------+------------+\n| gamma              | :math:`[-10, 10]` | Continuous |\n+--------------------+-------------------+------------+\n\nHyperparameters for AdaBoost\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAdaBoost is a popular ensemble method that combines multiple weak learners to create a strong classifier. It is particularly useful for boosting the performance of decision trees. The hyperparameters control the number of estimators and the learning rate, which scales the contribution of each classifier.\n\n+--------------------+-------------------+------------+\n| **Hyperparameter** |     **Range**     |  **Type**  |\n+====================+===================+============+\n| n_estimators       | :math:`[1, 100]`  | Integer    |\n+--------------------+-------------------+------------+\n| learning_rate      | :math:`[0.01, 1]` | Continuous |\n+--------------------+-------------------+------------+\n\nHyperparameters for Random Forest\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nRandom Forest is an ensemble learning method that builds multiple decision trees and merges them to get a more accurate and stable prediction. It is widely used for both classification and regression tasks. The hyperparameters include the number of trees, the depth of the trees, and various criteria for splitting nodes.\n\n+--------------------------+--------------------+-------------+\n|    **Hyperparameter**    |     **Range**      |  **Type**   |\n+==========================+====================+=============+\n| n_estimators             | :math:`[1, 1000]`  | Integer     |\n+--------------------------+--------------------+-------------+\n| max_depth                | :math:`[1, 100]`   | Integer     |\n+--------------------------+--------------------+-------------+\n| criterion                | {gini, entropy}    | Categorical |\n+--------------------------+--------------------+-------------+\n| min_samples_leaf         | :math:`[1, 20]`    | Integer     |\n+--------------------------+--------------------+-------------+\n| min_weight_fraction_leaf | :math:`[0.0, 0.5]` | Continuous  |\n+--------------------------+--------------------+-------------+\n| min_impurity_decrease    | :math:`[0.0, 1.0]` | Continuous  |\n+--------------------------+--------------------+-------------+
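\n\nFor orientation, a space such as the Random Forest one above could be declared with the same search-space primitives used in the registration example. `Integer` and `Categorical` are assumed here to be analogues of the documented `Continuous` variable type:\n\n.. code-block:: python\n\n    # Sketch: declaring the Random Forest space from the table above.\n    # `Integer` and `Categorical` are assumed variable types; only\n    # `Continuous` appears in the registration example.\n    variables = [\n        Integer('n_estimators', (1, 1000)),\n        Integer('max_depth', (1, 100)),\n        Categorical('criterion', ['gini', 'entropy']),\n        Integer('min_samples_leaf', (1, 20)),\n        Continuous('min_weight_fraction_leaf', (0.0, 0.5)),\n        Continuous('min_impurity_decrease', (0.0, 1.0)),\n    ]\n    ss = SearchSpace(variables)\n\nHyperparameters for XGBoost\n^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nXGBoost is an efficient and scalable implementation of gradient boosting, designed for speed and performance. 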
It is widely used in machine learning competitions and industry for classification and regression tasks. The hyperparameters include learning rates, tree depths, and regularization parameters, which control the complexity of the model and its ability to generalize.\n\n+--------------------+-----------------------+------------+\n| **Hyperparameter** |       **Range**       |  **Type**  |\n+====================+=======================+============+\n| eta                | :math:`[-10.0, 0.0]`  | Continuous |\n+--------------------+-----------------------+------------+\n| max_depth          | :math:`[1, 15]`       | Integer    |\n+--------------------+-----------------------+------------+\n| min_child_weight   | :math:`[0.0, 7.0]`    | Continuous |\n+--------------------+-----------------------+------------+\n| colsample_bytree   | :math:`[0.01, 1.0]`   | Continuous |\n+--------------------+-----------------------+------------+\n| colsample_bylevel  | :math:`[0.01, 1.0]`   | Continuous |\n+--------------------+-----------------------+------------+\n| reg_lambda         | :math:`[-10.0, 10.0]` | Continuous |\n+--------------------+-----------------------+------------+\n| reg_alpha          | :math:`[-10.0, 10.0]` | Continuous |\n+--------------------+-----------------------+------------+\n| subsample_per_it   | :math:`[0.1, 1.0]`    | Continuous |\n+--------------------+-----------------------+------------+\n| n_estimators       | :math:`[1, 50]`       | Integer    |\n+--------------------+-----------------------+------------+\n| gamma              | :math:`[0.0, 1.0]`    | Continuous |\n+--------------------+-----------------------+------------+\n\nHyperparameters for GLMNet\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nGLMNet is a regularized regression model that supports both LASSO and ridge regression. It is particularly useful for high-dimensional datasets where regularization is necessary to prevent overfitting. The hyperparameters control the strength of the regularization and the balance between L1 and L2 penalties.\n\n+--------------------+---------------------------+-------------+\n| **Hyperparameter** |         **Range**         |  **Type**   |\n+====================+===========================+=============+\n| lambda             | :math:`[0, 10^5]`         | Log-integer |\n+--------------------+---------------------------+-------------+\n| alpha              | :math:`[0, 1]`            | Continuous  |\n+--------------------+---------------------------+-------------+\n| nlambda            | :math:`[1, 100]`          | Integer     |\n+--------------------+---------------------------+-------------+\n\nHyperparameters for AlexNet\n^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAlexNet is a convolutional neural network (CNN) architecture that revolutionized the field of computer vision by achieving significant improvements on the ImageNet dataset. 
The hyperparameters include learning rate, dropout rate, weight decay, and the choice of activation function, all of which are crucial for training deep neural networks.\n\n+---------------------+----------------------------+-------------+\n| **Hyperparameter**  |         **Range**          |  **Type**   |\n+=====================+============================+=============+\n| learning_rate       | :math:`[10^{-5}, 10^{-1}]` | Continuous  |\n+---------------------+----------------------------+-------------+\n| dropout_rate        | :math:`[0.0, 0.5]`         | Continuous  |\n+---------------------+----------------------------+-------------+\n| weight_decay        | :math:`[10^{-5}, 10^{-2}]` | Continuous  |\n+---------------------+----------------------------+-------------+\n| activation_function | {ReLU, Leaky ReLU, ELU}    | Categorical |\n+---------------------+----------------------------+-------------+\n\nHyperparameters for 2-Layer Bayesian Neural Network (BNN)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nBayesian Neural Networks (BNNs) provide a probabilistic interpretation of deep learning models by introducing uncertainty in the weights. This allows BNNs to express model uncertainty, which is crucial for tasks where uncertainty quantification is important. The hyperparameters include layer sizes, step length, burn-in period, and momentum decay.\n\n+--------------------+----------------------------+----------------+\n| **Hyperparameter** |         **Range**          |    **Type**    |\n+====================+============================+================+\n| layer 1            | :math:`[2^4, 2^9]`         | Log-integer    |\n+--------------------+----------------------------+----------------+\n| layer 2            | :math:`[2^4, 2^9]`         | Log-integer    |\n+--------------------+----------------------------+----------------+\n| step_length        | :math:`[10^{-6}, 10^{-1}]` | Log-continuous |\n+--------------------+----------------------------+----------------+\n| burn_in            | :math:`[0, 8]`             | Integer        |\n+--------------------+----------------------------+----------------+\n| momentum_decay     | :math:`[0, 1]`             | Log-continuous |\n+--------------------+----------------------------+----------------+\n\nHyperparameters for CNNs\n^^^^^^^^^^^^^^^^^^^^^^^^\n\nConvolutional Neural Networks (CNNs) are the backbone of most modern computer vision systems. They are designed to automatically and adaptively learn spatial hierarchies of features through backpropagation. 
The hyperparameters include learning rate, momentum, regularization parameter, dropout rate, and activation function.\n\n+--------------------------+----------------------------+-------------+\n|    **Hyperparameter**    |         **Range**          |  **Type**   |\n+==========================+============================+=============+\n| learning_rate            | :math:`[10^{-6}, 10^{-1}]` | Continuous  |\n+--------------------------+----------------------------+-------------+\n| momentum                 | :math:`[0.0, 0.9]`         | Continuous  |\n+--------------------------+----------------------------+-------------+\n| regularization_parameter | :math:`[10^{-6}, 10^{-2}]` | Continuous  |\n+--------------------------+----------------------------+-------------+\n| dropout_rate             | :math:`[0, 0.5]`           | Continuous  |\n+--------------------------+----------------------------+-------------+\n| activation_function      | {ReLU, Leaky ReLU, Tanh,   | Categorical |\n|                          | Sigmoid}                   |             |\n+--------------------------+----------------------------+-------------+\n\nHyperparameters for ResNet18\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nResNet18 is a residual network architecture that introduced the concept of residual connections, allowing for the training of very deep networks by mitigating the vanishing gradient problem. The hyperparameters include learning rate, momentum, dropout rate, and weight decay.\n\n+--------------------+----------------------------+------------+\n| **Hyperparameter** |         **Range**          |  **Type**  |\n+====================+============================+============+\n| learning_rate      | :math:`[10^{-5}, 10^{-1}]` | Continuous |\n+--------------------+----------------------------+------------+\n| momentum           | :math:`[0, 1]`             | Continuous |\n+--------------------+----------------------------+------------+\n| dropout_rate       | :math:`[0, 0.5]`           | Continuous |\n+--------------------+----------------------------+------------+\n| weight_decay       | :math:`[10^{-5}, 10^{-2}]` | Continuous |\n+--------------------+----------------------------+------------+\n\nHyperparameters for DenseNet\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nDenseNet is a densely connected convolutional network that connects each layer to every other layer in a feed-forward fashion. This architecture improves the flow of information and gradients throughout the network, making it easier to train. The hyperparameters include learning rate, momentum, dropout rate, and weight decay.\n\n+--------------------+----------------------------+------------+\n| **Hyperparameter** |         **Range**          |  **Type**  |\n+====================+============================+============+\n| learning_rate      | :math:`[2^3, 2^8]`         | Integer    |\n+--------------------+----------------------------+------------+\n| momentum           | :math:`[0, 1]`             | Continuous |\n+--------------------+----------------------------+------------+\n| dropout_rate       | :math:`[0, 0.5]`           | Continuous |\n+--------------------+----------------------------+------------+\n| weight_decay       | :math:`[10^{-5}, 10^{-1}]` | Continuous |\n+--------------------+----------------------------+------------+\n\nMachine Learning Tasks\n^^^^^^^^^^^^^^^^^^^^^^\n\nThis section lists the various sources of machine learning tasks used for hyperparameter optimization, including classification and regression problems. 
These datasets are widely recognized in the machine learning community and are used for benchmarking algorithms.\n\n+--------------------------------------------------------+---------------------------+------------+---------+\n|                       **Source**                       |         **Type**          | **Number** | **IDs** |\n+========================================================+===========================+============+=========+\n| `OpenML-CC18 <https://www.openml.org/s/99>`_           | Classification            | 78         | 1-78    |\n+--------------------------------------------------------+---------------------------+------------+---------+\n| `UC Irvine Repository <https://archive.ics.uci.edu/>`_ | Classification/Regression | 10         | 79-88   |\n+--------------------------------------------------------+---------------------------+------------+---------+\n| `NAS-Bench-360 <https://nb360.ml.cmu.edu/>`_           | Classification/Regression | 5          | 89-93   |\n+--------------------------------------------------------+---------------------------+------------+---------+\n| `NATS-Bench <https://github.com/D-X-Y/NATS-Bench>`_    | Classification            | 3          | 94-96   |\n+--------------------------------------------------------+---------------------------+------------+---------+\n| `SVHN <http://ufldl.stanford.edu/housenumbers/>`_      | Classification            | 1          | 97      |\n+--------------------------------------------------------+---------------------------+------------+---------+\n\n\n.. _cso-problems:\n\nConfigurable Software Optimization Problem\n------------------------------------------\n\nThis section provides a summary of the configurable software optimization (CSO) tasks, which involve optimizing various software systems. The tasks are characterized by the number of variables, objectives, and workloads, along with the sources of these workloads.\n\n+-------------------+---------------+----------------+---------------+------------------------------------------------------------------------------------------------------------------------------------------+\n| **Software Name** | **Variables** | **Objectives** | **Workloads** |                                                           **Workloads Source**                                                           |\n+===================+===============+================+===============+==========================================================================================================================================+\n| LLVM              | 93            | 8              | 50            | `PolyBench <https://web.cs.ucla.edu/~pouchet/software/polybench/>`_, `mibench <https://github.com/embecosm/mibench?tab=readme-ov-file>`_ |\n+-------------------+---------------+----------------+---------------+------------------------------------------------------------------------------------------------------------------------------------------+\n| GCC               | 105           | 8              | 50            | `PolyBench <https://web.cs.ucla.edu/~pouchet/software/polybench/>`_, `mibench <https://github.com/embecosm/mibench?tab=readme-ov-file>`_ |\n+-------------------+---------------+----------------+---------------+------------------------------------------------------------------------------------------------------------------------------------------+\n| MySQL             | 28            | 14             | 18            | `benchbase <https://github.com/cmu-db/benchbase.git>`_, `sysbench <https://github.com/akopytov/sysbench>`_                               |\n+-------------------+---------------+----------------+---------------+------------------------------------------------------------------------------------------------------------------------------------------+\n| Hadoop            | 206           | 1              | 29            | `HiBench <https://github.com/Intel-bigdata/HiBench>`_                                                                                    |\n+-------------------+---------------+----------------+---------------+------------------------------------------------------------------------------------------------------------------------------------------+\n\n.. _rna-problems:\n\nRNA Inverse Design Problem\n---------------------------\n\nRNA inverse design involves designing RNA sequences that fold into specific secondary structures. This task is crucial for understanding and manipulating RNA function in various biological processes. The datasets listed here are commonly used benchmarks for RNA design algorithms.\n\n+---------------------------------------------------------------------+-------------------------+-------------+\n|                             **Source**                              | **Min-Max Length (nt)** | **Samples** |\n+=====================================================================+=========================+=============+\n| `Eterna100 <https://github.com/eternagame/eterna100-benchmarking>`_ | 11-399                  | 100         |\n+---------------------------------------------------------------------+-------------------------+-------------+\n| `Rfam-learn test <https://rfam.org/>`_                              | 50-446                  | 100         |\n+---------------------------------------------------------------------+-------------------------+-------------+\n| `RNA-Strand <http://www.rnasoft.ca/strand/>`_                       | 4-4381                  | 50          |\n+---------------------------------------------------------------------+-------------------------+-------------+\n| `RNAStralign <https://rna.urmc.rochester.edu/>`_                    | 30-1851                 | 37149       |\n+---------------------------------------------------------------------+-------------------------+-------------+\n| `ArchiveII <https://rna.urmc.rochester.edu/>`_                      | 28-2968                 | 2975        |\n+---------------------------------------------------------------------+-------------------------+-------------+\n\n\n.. _pif-problems:\n\nProtein Inverse Folding Problem\n--------------------------------\n\nProtein inverse folding involves designing new amino acid sequences that fold into a desired backbone structure. These problems are essential for applications in drug design, biotechnology, and synthetic biology. The datasets listed here are widely used in protein inverse folding research.\n\n+--------------------------------------------------------+-----------------------------+-------------+\n|                       **Source**                       |          **Type**           | **Numbers** |\n+========================================================+=============================+=============+\n| `Absolut! <https://github.com/csi-greifflab/Absolut>`_ | Antibody design             | 159         |\n+--------------------------------------------------------+-----------------------------+-------------+\n| `CATH <https://www.cathdb.info/>`_                     | Single-chain protein design | 19752       |\n+--------------------------------------------------------+-----------------------------+-------------+\n| `Protein Data Bank <https://www.rcsb.org/>`_           | Multi-chain protein design  | 26361       |\n+--------------------------------------------------------+-----------------------------+-------------+\n\n.. _parallelization:\n\nParallelization\n---------------\n\nTo-do"
  },
  {
    "path": "docs/source/usage/results.rst",
    "content": "Results Analysis\n================\n\n\n.. admonition:: Overview\n   :class: info\n\n   - :ref:`Register a New Results Analysis Method <registering-new-analysis>`: How to add a new results analysis method to :ref:`TransOPT <home>`.\n   - :ref:`Customization Analysis Pipline<customization>`: How to customize your own results analysis pipline or add your own analysis method into the pipline.\n   - :ref:`Performance Evaluation Metrics <performance-evaluation-metrics>`: The list of the performance evaluation metrics available in :ref:`TransOPT <home>`\n   - :ref:`Statistical Measures <statistical-measures>`: The list of the statistical measures supportede in :ref:`TransOPT <home>`\n\n\n\n.. _registering-new-analysis:\n\nRegister a New Results Analysis Method\n--------------------------------------\n\n\n.. _customization:\n\n\nCustomization Analysis Pipline\n------------------------------\n\n\n.. _performance-evaluation-metrics:\n\nList of Performance Evaluation Metrics\n--------------------------------------\n\nFor each type of task instance, the framework offers performance evaluation metrics to assess the quality of the solutions generated by the algorithms. The metrics are categorized based on the type of task and are designed to evaluate various aspects of the solutions. The tables below summarize the performance metrics available for different tasks.\n\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|         **Task**         |     **Metric**     |                       **Description**                        | **Scale** |   **Type**   |\n+==========================+====================+==============================================================+===========+==============+\n| **Synthetic**            | Absolute Error     | The difference between the min value and the optimal         | [0, ∞]    | Minimization |\n|                          |                    | solution.                                                    |           |              |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n| **HPO (Classification)** | F1 Score           | The mean of precision and recall, providing a balanced       | [0, 1]    | Maximization |\n|                          |                    | measure of accuracy.                                         |           |              |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | Area Under Curve   | The area under the receiver operating characteristic         | [0, 1]    | Maximization |\n|                          |                    | (ROC) curve, quantifying the overall ability of a classifier |           |              |\n|                          |                    | to discriminate between positive and negative instances.     
|           |              |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n| **HPO (Regression)**     | RMSE               | Root mean squared error (RMSE) measures the average          | [0, ∞]    | Minimization |\n|                          |                    | magnitude of the differences between predicted values and    |           |              |\n|                          |                    | actual values.                                               |           |              |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | MAE                | Mean absolute error (MAE) measures the average absolute      | [0, ∞]    | Minimization |\n|                          |                    | differences between predicted values and actual values.      |           |              |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n| **Protein Design**       | Binding Affinity   | The strength of the interaction between a protein and its    | [-∞, 0]   | Minimization |\n|                          |                    | ligand, typically measured by the equilibrium dissociation   |           |              |\n|                          |                    | constant.                                                    |           |              |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n| **RNA Inverse Design**   | GC-content         | The percentage of guanine (G) and cytosine (C) bases in a    | [0, 1]    | Maximization |\n|                          |                    | DNA or RNA molecule, which affects the stability and         |           |              |\n|                          |                    | melting temperature.                                         |           |              |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n| **LLVM/GCC**             | Avg Execution Time | The average execution time of multiple runs.                 | [0, ∞]    | Minimization |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | Compilation Time   | The time required to compile the code.                       | [0, ∞]    | Minimization |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | File Size          | The size of the executable file generated after compilation. | [0, ∞]    | Minimization |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | Max RSS            | The maximum resident set size used during execution.         | [0, ∞]    | Minimization |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | PAPI TOT CYC       | The total number of CPU cycles consumed during execution.    
| [0, ∞]    | Minimization |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | PAPI TOT INS       | The total number of instructions executed by the CPU.        | [0, ∞]    | Minimization |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | PAPI BR MSP        | The number of times the CPU mispredicted branch directions.  | [0, ∞]    | Minimization |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | PAPI BR PRC        | The number of times the CPU correctly predicted branch       | [0, ∞]    | Minimization |\n|                          |                    | directions.                                                  |           |              |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | PAPI BR CN         | The number of conditional branch instructions.               | [0, ∞]    | Minimization |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | PAPI MEM WCY       | The number of cycles spent waiting for memory access.        | [0, ∞]    | Minimization |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n| **MySQL**                | Throughput         | The number of transactions processed per unit of time.       | [0, ∞]    | Maximization |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | Latency            | The time required to complete a single transaction from      | [0, ∞]    | Minimization |\n|                          |                    | initiation to completion.                                    |           |              |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | CPU Usage          | The proportion of CPU resources used during database         | [0, ∞]    | Minimization |\n|                          |                    | operations.                                                  |           |              |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n|                          | Memory Usage       | The amount of memory resources used during database          | [0, ∞]    | Minimization |\n|                          |                    | operations.                                                  |           |              |\n+--------------------------+--------------------+--------------------------------------------------------------+-----------+--------------+\n| **Hadoop**               | Execution Time     | The execution time of a big data task.                       
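To make the scale and type columns concrete, here is a small sketch of how a few of the metrics above can be computed with scikit-learn (already a TransOPT dependency); the arrays are illustrative placeholders rather than TransOPT API calls.\n\n.. code-block:: python\n\n   import numpy as np\n   from sklearn.metrics import (f1_score, roc_auc_score,\n                                mean_squared_error, mean_absolute_error)\n\n   y_true = np.array([0, 1, 1, 0, 1])             # ground-truth labels (placeholder)\n   y_pred = np.array([0, 1, 0, 0, 1])             # predicted labels (placeholder)\n   y_score = np.array([0.2, 0.9, 0.4, 0.1, 0.8])  # predicted probabilities (placeholder)\n\n   print(\"F1 :\", f1_score(y_true, y_pred))        # maximization, [0, 1]\n   print(\"AUC:\", roc_auc_score(y_true, y_score))  # maximization, [0, 1]\n\n   y_reg_true = np.array([1.0, 2.0, 3.0])\n   y_reg_pred = np.array([1.1, 1.9, 3.2])\n   print(\"RMSE:\", np.sqrt(mean_squared_error(y_reg_true, y_reg_pred)))  # minimization, [0, ∞]\n   print(\"MAE :\", mean_absolute_error(y_reg_true, y_reg_pred))         # minimization, [0, ∞]\n\n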
.. _statistical-measures:\n\n\nStatistical Measures\n--------------------\n\n\nThis section provides detailed explanations of the statistical methods used for analyzing the performance of different algorithms. Each method is accompanied by the relevant formulas and calculation procedures.\n\nWilcoxon Signed-Rank Test\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\nThe **Wilcoxon signed-rank test** is a non-parametric statistical test used to compare two paired samples. Unlike the paired t-test, the Wilcoxon signed-rank test does not assume that the differences between pairs are normally distributed. It is particularly useful when dealing with small sample sizes or non-normally distributed data.\n\nGiven two related samples :math:`X` and :math:`Y`, the steps to perform the Wilcoxon signed-rank test are:\n\n1. **Compute the differences** between each pair of observations: :math:`d_i = X_i - Y_i`.\n2. **Rank the absolute values** of the differences, assigning ranks from the smallest to the largest difference.\n3. **Assign signs** to the ranks based on the sign of the original differences :math:`d_i`.\n4. **Calculate the test statistic** :math:`W`, which is the sum of the ranks corresponding to the positive differences:\n\n   .. math::\n\n      W = \\sum_{d_i > 0} \\text{Rank}(d_i)\n\n5. Compare the computed test statistic :math:`W` against the critical value from the Wilcoxon signed-rank table or calculate the p-value to determine the significance of the result.\n\n
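In practice the test does not need to be implemented by hand: SciPy (already a TransOPT dependency) ships an implementation. A minimal sketch with placeholder performance values:\n\n.. code-block:: python\n\n   import numpy as np\n   from scipy.stats import wilcoxon\n\n   # Paired accuracies of two algorithms over the same six seeds (placeholders)\n   algo_a = np.array([0.81, 0.79, 0.84, 0.80, 0.78, 0.83])\n   algo_b = np.array([0.76, 0.77, 0.80, 0.79, 0.74, 0.81])\n\n   stat, p_value = wilcoxon(algo_a, algo_b)  # two-sided test by default\n   print(f\"W = {stat}, p = {p_value:.4f}\")\n   # Reject the null hypothesis of equal performance when p < alpha (e.g., 0.05)\n\n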
Scott-Knott Test\n^^^^^^^^^^^^^^^^\n\nThe **Scott-Knott test** is a statistical method used to rank the performance of different techniques across multiple runs on each benchmark instance. It is particularly effective in scenarios where multiple comparisons are being made, and it controls the family-wise error rate.\n\nThe procedure involves:\n\n1. **Partitioning the data**: Initially, all techniques are considered in one group. The group is then split into two subgroups if the mean difference between them is statistically significant.\n2. **Calculating the mean difference** between the groups using an appropriate test (e.g., ANOVA or t-test).\n3. **Assigning ranks**: If a significant difference is found, the techniques are ranked within their respective subgroups. If no significant difference is found, the techniques are considered to be in the same rank.\n4. **Repeating the process** until no further significant splits can be made.\n\nThe Scott-Knott test is particularly useful for determining the relative performance of multiple techniques, providing a clear ranking based on statistically significant differences.\n\nA12 Effect Size\n^^^^^^^^^^^^^^^\n\nThe **A12 effect size** is a non-parametric measure used to evaluate the probability that one algorithm outperforms another. It is particularly useful in understanding whether observed differences are practically significant, beyond just being statistically significant.\n\nThe A12 statistic is calculated as follows:\n\n1. Let :math:`A` and :math:`B` be the two sets of performance measures for two algorithms.\n2. **Calculate the A12 statistic**:\n\n   .. math::\n\n      A_{12} = \\frac{\\sum_{x \\in A} \\sum_{y \\in B} \\left[ \\mathbf{I}(x > y) + 0.5 \\cdot \\mathbf{I}(x = y) \\right]}{|A| \\cdot |B|}\n\n   where :math:`\\mathbf{I}(\\cdot)` is the indicator function. A value of :math:`A_{12} = 0.5` means the two algorithms perform equivalently, while values above 0.5 mean that algorithm :math:`A` tends to outperform algorithm :math:`B`.\n\nCritical Difference (CD)\n^^^^^^^^^^^^^^^^^^^^^^^^\n\nThe **Critical Difference (CD)** is a statistical measure used to assess whether the performance differences between algorithms are genuine or merely the result of random variation. It is typically used in conjunction with methods like the Friedman test or Nemenyi post-hoc test to evaluate multiple algorithms across multiple datasets.\n\nThe steps involved in calculating the Critical Difference are:\n\n1. **Perform a Friedman test** to rank the algorithms for each dataset.\n2. **Calculate the average ranks** for each algorithm across all datasets.\n3. **Compute the Critical Difference (CD)** using the following formula:\n\n   .. math::\n\n      \\text{CD} = q_{\\alpha} \\sqrt{\\frac{k(k+1)}{6N}}\n\n   where:\n\n   - :math:`q_{\\alpha}` is the critical value for a given significance level :math:`\\alpha` from the studentized range statistic.\n   - :math:`k` is the number of algorithms.\n   - :math:`N` is the number of datasets.\n\n4. If the difference in average ranks between two algorithms exceeds the CD, the performance difference is considered statistically significant, and not due to random variation.\n\nThese statistical methods provide robust tools for comparing algorithm performance across various benchmarks, ensuring that conclusions drawn are both statistically and practically significant.\n\n
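The A12 statistic and the CD formula above translate directly into code. The following is a minimal sketch (the helper names ``a12`` and ``critical_difference`` are illustrative, not part of TransOPT); the critical value :math:`q_{\\alpha}` must still be taken from a studentized-range table, so it is passed in as an argument.\n\n.. code-block:: python\n\n   import numpy as np\n\n   def a12(a, b):\n       \"\"\"Vargha-Delaney A12: probability that a draw from `a` beats a draw\n       from `b`, counting ties as 0.5.\"\"\"\n       a, b = np.asarray(a), np.asarray(b)\n       wins = (a[:, None] > b[None, :]).sum()\n       ties = (a[:, None] == b[None, :]).sum()\n       return (wins + 0.5 * ties) / (len(a) * len(b))\n\n   def critical_difference(q_alpha, k, n):\n       \"\"\"Nemenyi critical difference for k algorithms over n datasets.\"\"\"\n       return q_alpha * np.sqrt(k * (k + 1) / (6.0 * n))\n\n   print(a12([0.81, 0.84, 0.83], [0.76, 0.80, 0.79]))    # 1.0 -> A always wins\n   print(critical_difference(q_alpha=2.343, k=3, n=10))  # q_alpha for alpha=0.05, k=3\n"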
  },
  {
    "path": "docs/source/usage/visualization.rst",
    "content": "Visualization\n===============\n\nThis section demonstrates various visualization techniques used in TransOPT.\n\nData Filtering and Statistical Visualization\n--------------------------------------------\n\nThis section demonstrates how to filter data into multiple groups based on different conditions, perform statistical analysis on these groups, and visualize the results using box plots and trajectory plots.\n\n.. figure:: /_static/figures/visualization/filter.jpeg\n   :alt: Data Filtering Process\n   :width: 100%\n   :align: center\n\n   Figure 1: Four groups with different surrogate model. \n\nThe above figure illustrates the process of filtering data into multiple groups based on different conditions. This visual representation helps to understand how the data is segmented and analyzed in our visualization approach.\n\nKey steps in the data filtering process:\n\n1. Click + to add a new filter group.\n2. Define filter conditions for each group.\n3. Apply filters and generate visualizations (e.g., box plots, trajectory plots) for each group\n\n\nVisualization of Filtered Data\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAfter filtering the data into groups, TransOPT provides two main types of visualizations to compare and analyze the results: trajectory plots and box plots.\n\nTrajectory Plot\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\nThe trajectory plot shows the performance of different groups over time or iterations.\n\n.. figure:: /_static/figures/visualization/traj_compare.jpg\n   :alt: Trajectory Plot of Different Groups\n   :width: 50%\n   :align: center\n\n   Figure 2: Trajectory plot comparing performance of different surrogate model groups over iterations.\n\nThis plot allows you to:\n\n- Compare the convergence rates of different groups\n- Identify which group performs better at different stages of the optimization process\n- Observe any significant differences in performance trends among the groups\n\nBox Plot\n\"\"\"\"\"\"\"\"\n\nThe box plot provides a statistical summary of the performance distribution for each group.\n\n.. figure:: /_static/figures/visualization/box_compare.jpg\n   :alt: Box Plot of Different Groups\n   :width: 50%\n   :align: center\n\n   Figure 3: Box plot showing performance distribution of different surrogate model groups.\n\nKey insights from the box plot:\n\n- Median performance of each group\n- Spread of performance within each group\n- Presence of any outliers\n- Easy comparison of performance distributions across groups\n\n\n\n\nAnalysis of Individual Datasets\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nTransOPT also provides tools for in-depth analysis of individual datasets. This section outlines the process and visualizations available for single dataset analysis.\n\n1. Dataset Selection\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\nThe first step is to select a specific dataset for analysis. Once selected, TransOPT generates a summary of the dataset's key information.\n\n.. figure:: /_static/figures/visualization/choose.jpg\n   :alt: Dataset Information Summary\n   :width: 100%\n   :align: center\n\n   Figure 4: Summary of selected dataset information, including algorithm modules used and optimization problem details.\n\n\n2. Trajectory Plot\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\nThe trajectory plot for the selected dataset shows the optimization performance over time or iterations.\n\n.. 
figure:: /_static/figures/visualization/traj_solo.jpg\n   :alt: Trajectory Plot of Single Dataset\n   :width: 50%\n   :align: center\n\n   Figure 5: Trajectory plot showing the optimization performance for the selected dataset.\n\nThis visualization allows users to:\n\n- Observe the convergence behavior of the optimization process\n- Identify any plateaus or sudden improvements in performance\n- Assess the overall efficiency of the optimization algorithm for this specific dataset\n\n3. Variable Importance\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\nThe variable importance plot highlights which features or parameters had the most significant impact on the optimization outcome.\n\n.. figure:: /_static/figures/visualization/importance.jpg\n   :alt: Variable Importance Plot\n   :width: 50%\n   :align: center\n\n   Figure 6: Variable importance plot showing the relative impact of different features or parameters.\n\nThis visualization helps users:\n\n- Identify the most influential variables in the optimization process\n- Understand which parameters might require more careful tuning\n- Gain insights into the underlying structure of the optimization problem\n\n4. Dimensionality Reduction Plot\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\nThe dimensionality reduction plot provides a 2D representation of the high-dimensional sampling data, typically using techniques like PCA or t-SNE.\n\n.. figure:: /_static/figures/visualization/footprint.jpg\n   :alt: Dimensionality Reduction Plot\n   :width: 50%\n   :align: center\n\n   Figure 7: 2D plot of the sampled data after dimensionality reduction.\n\nThis visualization allows users to:\n\n- Observe clusters or patterns in the sampling data\n- Identify regions of the search space that were more heavily explored\n- Gain intuition about the structure of the optimization landscape\n\n\n"
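A plot of this kind can be reproduced outside the web UI in a few lines. The following is a minimal sketch using scikit-learn's PCA on placeholder data (in practice the rows of ``X`` would be the evaluated configurations of a run); it is an illustration, not the code TransOPT uses internally.\n\n.. code-block:: python\n\n   import numpy as np\n   import matplotlib.pyplot as plt\n   from sklearn.decomposition import PCA\n\n   rng = np.random.default_rng(0)\n   X = rng.random((200, 6))  # placeholder: 200 sampled configurations, 6 parameters\n   y = rng.random(200)       # placeholder: objective value of each sample\n\n   # Project the high-dimensional samples onto their first two principal components\n   X_2d = PCA(n_components=2).fit_transform(X)\n\n   plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y, cmap=\"viridis\")\n   plt.colorbar(label=\"objective value\")\n   plt.xlabel(\"First principal component\")\n   plt.ylabel(\"Second principal component\")\n   plt.savefig(\"footprint_sketch.png\")\n"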
  },
  {
    "path": "extra_requirements/analysis.json",
    "content": "{\n    \"analysis\": [\"pandas\", \"tikzplotlib\", \"pdf2image\", \"seaborn\", \"Pillow\"] \n}"
  },
  {
    "path": "extra_requirements/remote.json",
    "content": "{\n    \"remote\": [\"flask\", \"requests\", \"celery\"]\n}"
  },
  {
    "path": "requirements.txt",
    "content": "scipy>=1.4.1\nnumpy>=1.18.1\nConfigSpace>=0.4.12\nscikit-learn\nopenml\nmatplotlib\ntorch\ntorchvision\ngpytorch\nGPyOpt\ngym\nsobol-seq\nxgboost\nparamz\nemukit\npymoo\njax\ngplearn\noslo.concurrency>=4.2.0\ngit+https://github.com/SheffieldML/GPy.git@devel\nmmh3\nrich\ntqdm\nwilds\npyro-ppl\nbohb-hpo\nHEBO\ngit+https://github.com/RobustBench/robustbench.git\nhyperopt\nflask_cors\nopenai\n\n# Analysis\npandas\ntikzplotlib\npdf2image\nseaborn\nPillow\nnetworkx\n    \n# Remote\nflask\nrequests\ncelery"
  },
  {
    "path": "resources/docker/absolut_image/Dockerfile",
    "content": "FROM ubuntu:latest\n\nRUN apt-get update && \\\n    apt-get install -y git wget unzip build-essential\n\nENV INSTALL_DIR=/usr/local/Absolut\nENV TEMP_DIR=/root/Absolut_temp\nENV REPO_URL=https://github.com/csi-greifflab/Absolut\n\nRUN mkdir -p $INSTALL_DIR && mkdir -p $TEMP_DIR\n\nRUN git clone $REPO_URL $TEMP_DIR && \\\n    cd $TEMP_DIR/src && \\\n    sed -i 's/-Wl//g' Makefile && \\\n    make && \\\n    mv AbsolutNoLib /usr/local/bin/AbsolutNoLib\n\nRUN rm -rf $TEMP_DIR\n\nCOPY prepare_antigen.sh /usr/local/bin/prepare_antigen.sh\nRUN chmod +x /usr/local/bin/prepare_antigen.sh\n\nWORKDIR $INSTALL_DIR"
  },
  {
    "path": "resources/docker/absolut_image/prepare_antigen.sh",
    "content": "#!/bin/bash\n\n# 检查是否提供了 antigen 参数\nif [ -z \"$1\" ]; then\n    echo \"Usage: $0 <antigen>\"\n    exit 1\nfi\n\nANTIGEN=$1\nINSTALL_DIR=/usr/local/Absolut\n\n# 确保工作目录存在\nmkdir -p $INSTALL_DIR\ncd $INSTALL_DIR\n\n# 获取文件名和下载 URL\ninfo_output=$(AbsolutNoLib info_filenames $ANTIGEN)\nfilename=$(echo \"$info_output\" | grep -oP '(?<=Pre-calculated structures are in )[^\\s]+')\nurl=$(echo \"$info_output\" | grep -oP '(?<=curl -O -J )[^\\s]+')\n\n# 检查文件是否已经存在\nif [ -f \"$INSTALL_DIR/${filename}\" ]; then\n    echo \"File ${filename} already exists. Skipping download.\"\nelse\n    if [ -n \"$url\" ]; then\n        echo \"Downloading from URL: $url\"\n        download_filename=$(basename $url)\n        wget $url -O $INSTALL_DIR/$download_filename\n        if [ $? -eq 0 ]; then\n            unzip -o $INSTALL_DIR/$download_filename -d $INSTALL_DIR\n            rm $INSTALL_DIR/$download_filename\n        else\n            echo \"Download failed for $ANTIGEN\"\n            exit 1\n        fi\n    else\n        echo \"No URL found for antigen: $ANTIGEN\"\n        exit 1\n    fi\nfi"
  },
  {
    "path": "scripts/init_csstuning.sh",
    "content": "#!/bin/bash\npip install transopt_external/csstuning\n\nbash transopt_external/csstuning/cssbench/compiler/docker/build_docker.sh\nbash transopt_external/csstuning/cssbench/dbms/docker/build_docker.sh\n\ncsstuning_dbms_init -h\n"
  },
  {
    "path": "scripts/init_docker.sh",
    "content": "#!/bin/bash\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" &>/dev/null && pwd)\"\nDOCKER_ROOT_DIR=\"$SCRIPT_DIR/../resources/docker\"\n\nremove_old_images() {\n    local image_name=$1\n\n    old_image_ids=$(docker images -q --filter \"reference=$image_name\" | tail -n +2)\n    \n    if [ -n \"$old_image_ids\" ]; then\n        echo \"Removing old Docker image(s) with name '$image_name'...\"\n        docker rmi -f $old_image_ids\n    fi\n    \n    dangling_image_ids=$(docker images -f \"dangling=true\" -q)\n    if [ -n \"$dangling_image_ids\" ]; then\n        echo \"Removing dangling images...\"\n        docker rmi -f $dangling_image_ids\n    fi\n}\n\nbuild_docker_image() {\n    local image_name=$1\n    local docker_dir=$2\n    \n    if [ -f \"$docker_dir/Dockerfile\" ]; then\n        echo \"Building Docker image '$image_name'...\"\n        docker build -t \"$image_name\" \"$docker_dir\"\n        echo \"Docker image '$image_name' created successfully.\"\n\n        remove_old_images \"$image_name\"\n    else\n        echo \"Dockerfile not found in $docker_dir\"\n        exit 1\n    fi\n}\n\n# 构建 absolut_image\nbuild_docker_image \"absolut_image\" \"$DOCKER_ROOT_DIR/absolut_image\"\n"
  },
  {
    "path": "setup.py",
    "content": "import os\nimport json\nfrom setuptools import setup, find_packages\nimport subprocess\n\ndef get_extra_requirements(folder='./extra_requirements'):\n    \"\"\" Helper function to read in all extra requirement files in the specified\n        folder. \"\"\"\n    extra_requirements = {}\n    if not os.path.exists(folder):\n        print(f\"Folder {folder} does not exist.\")\n        return extra_requirements\n\n    for file in os.listdir(folder):\n        if file.endswith('.json'):\n            with open(os.path.join(folder, file), 'r', encoding='utf-8') as fh:\n                requirements = json.load(fh)\n                extra_requirements.update(requirements)\n\n    print(f\"Extra requirements: {extra_requirements}\")\n    return extra_requirements\n\nextra_requirements = get_extra_requirements()\n\n\ndef build_docker_image(image_name, docker_dir):\n    dockerfile_path = os.path.join(docker_dir, 'Dockerfile')\n    \n    if os.path.exists(dockerfile_path):\n        print(f\"Building Docker image {image_name}...\")\n        subprocess.run(['docker', 'build', '-t', image_name, docker_dir], check=True)\n        print(f\"Docker image '{image_name}' created successfully.\")\n    else:\n        print(f\"Dockerfile not found at {dockerfile_path}\")\n        raise FileNotFoundError(f\"Dockerfile not found at {dockerfile_path}\")\n\ndef init_absolut_docker():\n    docker_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'resources/docker/absolut_image')\n    build_docker_image('absolut_image', docker_dir)\n\nreq = [\n    \"scipy>=1.4.1\",\n    \"numpy>=1.18.1\",\n    \"ConfigSpace>=0.4.12\",\n    \"scikit-learn\",\n    \"openml\",\n    \"matplotlib\",\n    \"torch\",\n    \"torchvision\",\n    \"gpytorch\",\n    \n    # \"GPy\",\n    \"GPyOpt\",\n    \"gym\",\n    \"sobol-seq\",\n    \"xgboost\",\n    \"paramz\",\n    \"emukit\",\n    \"pymoo\",\n    \"jax\",\n    \"networkx\",\n    \"gplearn\",\n    \"oslo.concurrency>=4.2.0\",\n    'GPy @ git+https://github.com/SheffieldML/GPy.git@devel',\n    'mmh3',\n    'rich',\n    'flask_cors',\n    'openai'\n]\n\nsetup(\n    name=\"transopt\",\n    version=\"0.0.1\",\n    author=\"transopt\",\n    description=\"Transfer Optimiztion System\",\n    long_description=\"This is a longer description of my package.\",\n    url=\"https://github.com/maopl/TransOpt.git\",\n    classifiers=[\n        \"Development Status :: 3 - Alpha\",\n        \"Intended Audience :: Developers\",\n        \"License :: OSI Approved :: MIT License\",\n        \"Programming Language :: Python :: 3\",\n    ],\n    license=\"BSD\",\n    packages=find_packages(exclude=[\"hpobench\"]),\n    install_requires=req,\n    extras_require=extra_requirements,\n    entry_points={\n        'console_scripts': [\n            'transopt-server = transopt.agent.app:main',\n            'init-absolut-docker = transopt.scripts.init_docker:init_absolut_docker',\n        ],\n    }\n)\n"
  },
  {
    "path": "tests/EXP_NSGA2.py",
    "content": "import numpy as np\nfrom pymoo.algorithms.moo.nsga2 import NSGA2\nfrom pymoo.optimize import minimize\nfrom pymoo.core.problem import Problem\nfrom transopt.benchmark.HPO.HPO import HPO_ERM\n\nclass HPOProblem(Problem):\n    def __init__(self, task_name, budget_type, budget, seed, workload):\n        self.hpo = HPO_ERM(task_name=task_name, budget_type=budget_type, budget=budget, seed=seed, workload=workload, algorithm='ERM', gpu_id=0, augment='cutout', architecture='resnet', model_size=18, optimizer='nsga2_augment_cutout', base_dir='/data/')\n\n        original_ranges = self.hpo.configuration_space.original_ranges\n        n_var = len(original_ranges)\n        xl = np.array([original_ranges[key][0] for key in original_ranges])\n        xu = np.array([original_ranges[key][1] for key in original_ranges])\n        super().__init__(n_var=n_var, n_obj=2, n_constr=0, xl=xl, xu=xu)\n    \n    def _evaluate(self, X, out, *args, **kwargs):\n        f1 = []\n        f2 = []\n        for x in X:\n            config = {}\n            for i, param_name in enumerate(self.hpo.configuration_space.original_ranges):\n                if param_name == 'epoch':\n                    config[param_name] = int(x[i])\n                else:\n                    config[param_name] = x[i]\n            val_acc = self.hpo.objective_function(config)\n            f1.append(1 - val_acc['test_standard_acc'])  # Minimize 1 - accuracy\n            f2.append(1- val_acc['test_robust_acc'])  # Minimize number of epochs\n        out[\"F\"] = np.column_stack([f1, f2])\n\nif __name__ == \"__main__\":\n    problem = HPOProblem(task_name='test_task', budget_type='FEs', budget=3000, seed=0, workload=0)\n    algorithm = NSGA2(pop_size=40)\n    res = minimize(problem, algorithm, ('n_gen', 50), seed=1, verbose=True)\n    \n    print(\"Best solutions found:\")\n    for i in range(len(res.X)):\n        print(f\"Solution {i+1}: {res.X[i]}, Objectives: {res.F[i]}\")\n"
  },
  {
    "path": "tests/EXP_NSGA2_restart.py",
    "content": "import numpy as np\nfrom pymoo.algorithms.moo.nsga2 import NSGA2\nfrom pymoo.optimize import minimize\nfrom pymoo.core.problem import Problem\nfrom transopt.benchmark.HPO.HPO import HPO_ERM\nimport os\nimport pandas as pd\nfrom pymoo.util.nds.non_dominated_sorting import NonDominatedSorting\nimport matplotlib.pyplot as plt  # 添加這行\nfrom pymoo.core.population import Population\n\n\n\nclass HPOProblem(Problem):\n    def __init__(self, task_name, budget_type, budget, seed, workload, data_file):\n        self.hpo = HPO_ERM(task_name=task_name, budget_type=budget_type, budget=budget, seed=seed, workload=workload, algorithm='ERM',architecture='resnet', model_size=18, optimizer='nsga2_augment_true')\n        original_ranges = self.hpo.configuration_space.original_ranges\n        n_var = len(original_ranges)\n        xl = np.array([original_ranges[key][0] for key in original_ranges])\n        xu = np.array([original_ranges[key][1] for key in original_ranges])\n        super().__init__(n_var=n_var, n_obj=2, n_constr=0, xl=xl, xu=xu)\n\n        # Load data from specified file\n        self.data = {}\n        for filename in os.listdir(data_file):\n            file_path = os.path.join(data_file, filename)\n            if os.path.isfile(file_path):\n                import json\n                import re\n                with open(file_path, 'r') as f:\n                    \n                    content = json.load(f)\n                    # Extract decision variables from filename\n                    x = []\n                    for key in original_ranges.keys():\n                        pattern = rf'({key}_)([\\d.e-]+)'\n                        match = re.search(pattern, filename)\n                        if match:\n                            value = float(match.group(2))\n                            if key in ['l', 'weight_decay']:\n                                value = np.log10(value)\n                            x.append(value)\n                        elif key == 'epoch':\n                            # Special handling for 'epoch' which is an integer\n                            epoch_match = re.search(r'epoch_(\\d+)', filename)\n                            if epoch_match:\n                                x.append(int(epoch_match.group(1)))\n                        elif key in ['data_augmentation', 'class_balanced', 'nonlinear_classifier']:\n                            # Special handling for boolean values\n                            bool_match = re.search(rf'{key}_(True|False)', filename)\n                            if bool_match:\n                                x.append(bool_match.group(1) == 'True')\n                    self.data[filename] = {\n                        'x': x,\n                        'test_standard_acc': content['test_standard_acc'],\n                        'test_robust_acc': np.mean([v for k, v in content.items() if k.startswith('test_') and k != 'test_standard_acc'])\n                    }\n    def _evaluate(self, X, out, *args, **kwargs):\n        f1 = []\n        f2 = []\n        for x in X:\n            config = {}\n            for i, param_name in enumerate(self.hpo.configuration_space.original_ranges):\n                if param_name == 'epoch':\n                    config[param_name] = int(x[i])\n                else:\n                    config[param_name] = x[i]\n            val_acc = self.hpo.objective_function(config)\n            f1.append(1 - val_acc['test_standard_acc'])  # Minimize 1 - accuracy\n            f2.append(1- np.mean([v for k, v 
in val_acc.items() if k.startswith('test_') and k != 'test_standard_acc']))  # Minimize number of epochs\n        out[\"F\"] = np.column_stack([f1, f2])\n\nif __name__ == \"__main__\":\n    data_file = '/home/haxx/transopt_tmp/output/results/nsga2_false_augment_ERM_resnet_18_RobCifar10_0'\n    problem = HPOProblem(task_name='test_task', budget_type='FEs', budget=3000, seed=0, workload=0, \n                         data_file=data_file)\n    \n    # Extract objectives from the loaded data\n    F = np.array([[1 - data['test_standard_acc'], 1 - data['test_robust_acc']] for data in problem.data.values()])\n    \n    # Perform non-dominated sorting\n    nds = NonDominatedSorting()\n    fronts = nds.do(F)\n    \n    # Initialize lists for the initial population\n    initial_X = []\n    initial_F = []\n    pop_size = 40  # Assuming a population size of 40, adjust as needed\n\n    # Iterate through fronts and add solutions layer by layer\n    for front in fronts:\n        front_solutions = [list(problem.data.values())[i] for i in front]\n        \n        if len(initial_X) + len(front_solutions) <= pop_size:\n            initial_X.extend([sol['x'] for sol in front_solutions])\n            initial_F.extend([[1 - sol['test_standard_acc'], 1 - sol['test_robust_acc']] for sol in front_solutions])\n        else:\n            remaining_slots = pop_size - len(initial_X)\n            if remaining_slots > 0:\n                # Use niching to select the most diverse solutions from the current front\n                front_x = np.array([sol['x'] for sol in front_solutions])\n                from scipy.spatial.distance import cdist\n                distances = cdist(front_x, front_x)\n                selected_indices = []\n                \n                while len(selected_indices) < remaining_slots:\n                    if len(selected_indices) == 0:\n                        selected_indices.append(np.random.choice(len(front_x)))\n                    else:\n                        min_distances = np.min(distances[:, selected_indices], axis=1)\n                        min_distances[selected_indices] = -np.inf\n                        selected_indices.append(np.argmax(min_distances))\n                \n                initial_X.extend([front_solutions[i]['x'] for i in selected_indices])\n                initial_F.extend([[1 - front_solutions[i]['test_standard_acc'], 1 - front_solutions[i]['test_robust_acc']] for i in selected_indices])\n            break\n\n    # Create the initial population with X, F, and set evaluated correctly\n    initial_pop = Population.new(X=np.array(initial_X), F=np.array(initial_F))\n\n    for ind in initial_pop:\n        ind.evaluated = {\"F\", \"CV\"}  # Set evaluated to include both F and CV\n\n    # 創建NSGA2算法\n    algorithm = NSGA2(pop_size=len(initial_pop))\n    \n    # 設置總迭代次數\n    total_evaluations = 2000\n    current_evaluations = len(problem.data)\n    remaining_evaluations = total_evaluations - current_evaluations\n    remaining_generations = max(1, remaining_evaluations // pop_size)\n\n    # 繼續優化\n    res = minimize(problem, algorithm, ('n_gen', remaining_generations), seed=1, verbose=True)\n    \n    print(\"Best solutions found:\")\n    for i in range(len(res.X)):\n        print(f\"Solution {i+1}: {res.X[i]}, Objectives: {res.F[i]}\")\n"
  },
  {
    "path": "tests/EXP_bohb.py",
    "content": "from bohb import BOHB\nimport bohb.configspace as cs\nfrom transopt.benchmark.HPO.HPO import HPO_ERM\nimport numpy as np\n\n# Create a single HPO_ERM instance\nhpo = HPO_ERM(task_name='bohb_optimization', budget_type='FEs', budget=2000, seed=42, workload=0,algorithm='ERM',architecture='resnet', model_size=18, optimizer='bohb')\n\n# Define the objective function\ndef objective(config, budget):\n    result = hpo.objective_function(configuration=config, fidelity={'epoch': int(budget)})\n    return 1 - result['function_value']  # BOHB minimizes, so we return the function value directly\n\n# Define the configuration space\ndef get_configspace():\n    original_ranges = hpo.configuration_space.original_ranges\n    hyperparameters = [cs.UniformHyperparameter(param_name, lower=param_range[0], upper=param_range[1]) for param_name, param_range in original_ranges.items() ]\n    space = cs.ConfigurationSpace(hyperparameters)\n    \n    return space\n\nif __name__ == \"__main__\":\n    # Create the configuration space\n    config_space = get_configspace()\n    \n    # Initialize BOHB\n    bohb = BOHB(configspace=config_space,\n                eta=3, min_budget=1, max_budget=50, n_samples=200,\n                evaluate=objective)\n    \n    # Run optimization\n    results = bohb.optimize()\n    "
  },
  {
    "path": "tests/EXP_grid.py",
    "content": "import numpy as np\nfrom transopt.benchmark.HPO.HPO import HPO_ERM\nfrom scipy.stats import qmc\n\ndef sobol_search(n_samples, task_name, budget_type, budget, seed, workload):\n    hpo = HPO_ERM(task_name=task_name, budget_type=budget_type, budget=budget, seed=seed, workload=workload, optimizer='sobol')\n    original_ranges = hpo.configuration_space.original_ranges\n    n_var = len(original_ranges)\n    xl = np.array([original_ranges[key][0] for key in original_ranges])\n    xu = np.array([original_ranges[key][1] for key in original_ranges])\n    \n    # 创建Sobol序列采样器\n    sampler = qmc.Sobol(d=n_var, scramble=True, seed=seed)\n    \n    # 生成Sobol序列样本\n    sample = sampler.random(n=n_samples)\n    \n    # 将样本从[0,1]范围映射到参数实际范围\n    scaled_sample = qmc.scale(sample, xl, xu)\n    \n    best_val_acc = 0\n    best_config = None\n    \n    for i in range(n_samples):\n        config = {}\n        for j, param_name in enumerate(original_ranges.keys()):\n            config[param_name] = scaled_sample[i, j]\n        \n        # 运行目标函数\n        result = hpo.objective_function(configuration=config)\n        val_acc = 1 - result['function_value']  # 因为我们最小化的是1-accuracy\n        \n        print(f\"Trial {i + 1}/{n_samples}\")\n        print(f\"Configuration: {config}\")\n        print(f\"Validation Accuracy: {val_acc}\")\n        \n        if val_acc > best_val_acc:\n            best_val_acc = val_acc\n            best_config = config\n        \n        print(f\"Best Validation Accuracy so far: {best_val_acc}\")\n        print(\"--------------------\")\n    \n    print(\"\\nSobol Search Completed\")\n    print(f\"Best Configuration: {best_config}\")\n    print(f\"Best Validation Accuracy: {best_val_acc}\")\n\nif __name__ == \"__main__\":\n    # 设置随机种子以确保可重复性\n    np.random.seed(0)\n    \n    # 运行Sobol序列搜索\n    sobol_search(\n        n_samples=5000,  # 指定采样数量\n        task_name='sobol_search_hpo',\n        budget_type='FEs',\n        budget=5000,\n        seed=0,\n        workload=0  # 对应于 RobCifar10 数据集\n    )\n"
  },
  {
    "path": "tests/EXP_hebo.py",
    "content": "import numpy as np\nfrom hebo.design_space.design_space import DesignSpace\nfrom hebo.optimizers.hebo import HEBO\nfrom transopt.benchmark.HPO.HPO import HPO_ERM\n\n# Create a single HPO_ERM instance\nhpo = HPO_ERM(task_name='hebo_optimization', budget_type='FEs', budget=2000, seed=42, workload=0, algorithm='ERM',architecture='resnet', model_size=18, optimizer='hebo')\n\n# Define the objective function\ndef objective(config):\n    result = hpo.objective_function(configuration=config)\n    return 1 - result['function_value']\n\n# Define the design space\ndef get_design_space():\n    original_ranges = hpo.configuration_space.original_ranges\n    space = DesignSpace().parse([\n        {'name': param_name, 'type': 'num', 'lb': param_range[0], 'ub': param_range[1]}\n        for param_name, param_range in original_ranges.items()\n    ])\n    return space\n\nif __name__ == \"__main__\":\n    # Create the design space\n    design_space = get_design_space()\n    \n    # Initialize HEBO\n    opt = HEBO(design_space, scramble_seed=0)\n    \n    # Run optimization\n    n_iterations = 200\n    for i in range(n_iterations):\n        rec = opt.suggest(n_suggestions=1)\n        f_val = objective(rec.to_dict(orient='records')[0])\n        y = np.array([[f_val]])\n        opt.observe(rec, y)\n        print(f'After {i+1} iterations, best obj is {opt.y.min():.4f}')\n\n\n"
  },
  {
    "path": "tests/EXP_hyperopt.py",
    "content": "from hyperopt import fmin, tpe, hp, STATUS_OK, Trials\nfrom transopt.benchmark.HPO.HPO import HPO_ERM\nimport numpy as np\n\n# Create a single HPO_ERM instance\nhpo = HPO_ERM(task_name='hyperopt_optimization', budget_type='FEs', budget=2000, seed=0, workload=0,algorithm='ERM',architecture='resnet', model_size=18, optimizer='hyperopt')\n\n# Define the objective function\ndef objective(params):\n    # Convert hyperopt params to the format expected by HPO_ERM\n    config = {k: v[0] if isinstance(v, list) else v for k, v in params.items()}\n    result = hpo.objective_function(configuration=config, fidelity={'epoch': 50})\n    return {'loss': 1 - result['function_value'], 'status': STATUS_OK}\n\n# Define the search space\ndef get_hyperopt_space():\n    original_ranges = hpo.configuration_space.original_ranges\n    space = {}\n    for param_name, param_range in original_ranges.items():\n        space[param_name] = hp.uniform(param_name, param_range[0], param_range[1])\n    return space\n\nif __name__ == \"__main__\":\n    # Create the search space\n    search_space = get_hyperopt_space()\n    \n    # Run optimization\n    n_iterations = 200\n    trials = Trials()\n    # Set a random seed for reproducibility\n    random_seed = 42\n    np.random.seed(random_seed)\n    \n    best = fmin(fn=objective,\n                space=search_space,\n                algo=tpe.suggest,\n                max_evals=n_iterations,\n                trials=trials,\n                rstate=np.random.default_rng(random_seed))\n    \n    # Print results\n    print(\"Best hyperparameters found:\", best)\n    print(\"Best objective value:\", 1 - min(trials.losses()))\n"
  },
  {
    "path": "tests/EXP_random.py",
    "content": "import numpy as np\nfrom transopt.benchmark.HPO.HPO import HPO_ERM\nimport random\n\ndef random_search(n_trials, task_name, budget_type, budget, seed, workload):\n    hpo = HPO_ERM(task_name=task_name, budget_type=budget_type, budget=budget, seed=seed, workload=workload, optimizer='random')\n    \n    original_ranges = hpo.configuration_space.original_ranges\n    n_var = len(original_ranges)\n    xl = np.array([original_ranges[key][0] for key in original_ranges])\n    xu = np.array([original_ranges[key][1] for key in original_ranges])\n    \n    # 用于存储已经尝试过的配置\n    tried_configs = set()\n    \n    best_val_acc = 0\n    best_config = None\n    \n    for trial in range(n_trials):\n        # 生成新的配置，直到得到一个未尝试过的配置\n        while True:\n            config = {}\n            for i, name in enumerate(original_ranges.keys()):\n                config[name] = np.random.uniform(xl[i], xu[i])\n            \n            # 将配置转换为不可变的类型（元组），以便可以添加到集合中\n            config_tuple = tuple(sorted(config.items()))\n            if config_tuple not in tried_configs:\n                tried_configs.add(config_tuple)\n                break\n        \n        # 设置固定的fidelity值\n        \n        # 运行目标函数\n        result = hpo.objective_function(configuration=config)\n        val_acc = 1 - result['function_value']  # 因为我们最小化的是1-accuracy\n        \n        print(f\"Trial {trial + 1}/{n_trials}\")\n        print(f\"Configuration: {config}\")\n        print(f\"Validation Accuracy: {val_acc}\")\n        \n        if val_acc > best_val_acc:\n            best_val_acc = val_acc\n            best_config = config\n        \n        print(f\"Best Validation Accuracy so far: {best_val_acc}\")\n        print(\"--------------------\")\n    \n    print(\"\\nRandom Search Completed\")\n    print(f\"Best Configuration: {best_config}\")\n    print(f\"Best Validation Accuracy: {best_val_acc}\")\n\nif __name__ == \"__main__\":\n    # 设置随机种子以确保可重复性\n    np.random.seed(0)\n    random.seed(0)\n    \n    # 运行随机搜索\n    random_search(\n        n_trials=5000,  # 指定随机搜索的次数\n        task_name='random_search_hpo',\n        budget_type='FEs',\n        budget=5000,\n        seed=0,\n        workload=0  # 对应于 RobCifar10 数据集\n    )\n"
  },
  {
    "path": "tests/EXP_smac.py",
    "content": "from ConfigSpace import ConfigurationSpace\nimport ConfigSpace as cs\nimport numpy as np\nimport time\nfrom smac import HyperparameterOptimizationFacade, Scenario\nfrom transopt.benchmark.HPO.HPO import HPO_ERM\n\n# Create a single HPO_ERM instance\nhpo = HPO_ERM(task_name='smac_optimization', budget_type='FEs', budget=2000, seed=42, workload=0,algorithm='ERM',architecture='resnet', model_size=18, optimizer='smac')\n\n# Define the objective function\ndef objective(configuration, seed: int = 0):\n    start = time.time()\n    result = hpo.objective_function(configuration=configuration.get_dictionary())\n    end = time.time()\n    return 1 - result['function_value']  # SMAC minimizes, so we return 1 - accuracy\n\n# Define the configuration space\ndef get_configspace():\n    space = ConfigurationSpace()\n    original_ranges = hpo.configuration_space.original_ranges\n    for param_name, param_range in original_ranges.items():\n        space.add_hyperparameter(cs.UniformFloatHyperparameter(param_name, lower=param_range[0], upper=param_range[1]))\n    return space\n\nif __name__ == \"__main__\":\n    # Create the configuration space\n    config_space = get_configspace()\n    \n    # Scenario object specifying the optimization environment\n    scenario = Scenario(config_space, deterministic=True, n_trials=200)\n    \n    # Use SMAC to find the best configuration/hyperparameters\n    smac = HyperparameterOptimizationFacade(scenario, objective)\n    incumbent = smac.optimize()\n    \n    # Print the best configuration and its performance\n    print(f\"Best configuration: {incumbent}\")\n    print(f\"Best performance: {1 - smac.intensifier.trajectory[-1].cost}\")  # Convert back to accuracy\n"
  },
  {
    "path": "tests/EXP_tpe.py",
    "content": "import ConfigSpace as cs\nimport time\nimport numpy as np\nfrom typing import Any, Dict, List, Optional, Protocol, Tuple\n\nfrom tpe.optimizer import TPEOptimizer\n\nfrom transopt.benchmark.HPO.HPO import HPO_ERM\nfrom tpe.optimizer.base_optimizer import BaseOptimizer, ObjectiveFunc\n\n# Create a single HPO_ERM instance\nhpo = HPO_ERM(task_name='tpe_optimization', budget_type='FEs', budget=100, seed=42, workload=0, optimizer='tpe')\n\nclass formal_obj(ObjectiveFunc):\n    def __init__(self, f):\n        self.f = f\n    \n    def __call__(self, eval_config: Dict[str, Any]) -> Tuple[Dict[str, float], float]:\n        start = time.time()\n        results = self.f(eval_config)\n        return {'loss': 1 - results['function_value']}, time.time() - start\n\n# Create an instance of formal_obj with hpo.objective_function\n\n# Define the configuration space\ndef get_configspace():\n    original_ranges = hpo.configuration_space.original_ranges\n    hyperparameters = [cs.UniformFloatHyperparameter(param_name, lower=param_range[0], upper=param_range[1]) for param_name, param_range in original_ranges.items() ]\n    space = cs.ConfigurationSpace(hyperparameters)\n    \n    return space\n\nif __name__ == \"__main__\":\n    # Create the configuration space\n    config_space = get_configspace()\n    obj_f = formal_obj(hpo.objective_function)\n\n    # Initialize TPE Optimizer\n    opt = TPEOptimizer(obj_func=obj_f, config_space=config_space, n_init=10, max_evals=100, resultfile='tpe_results.json')\n    \n    # Run optimization\n    best_config, best_value = opt.optimize()\n"
  },
  {
    "path": "tests/data_analysis.py",
    "content": "import os\nimport json\nimport re\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom pymoo.util.nds.non_dominated_sorting import NonDominatedSorting\nfrom scipy import stats\nfrom statsmodels.formula.api import ols\nfrom sklearn.decomposition import PCA\nfrom mpl_toolkits.mplot3d import Axes3D\n\ndef load_data(data_folder):\n    data = {}\n    for filename in os.listdir(data_folder):\n        file_path = os.path.join(data_folder, filename)\n        if os.path.isfile(file_path):\n            with open(file_path, 'r') as f:\n                content = json.load(f)\n                x = []\n                for key in ['lr', 'weight_decay', 'momentum', 'dropout_rate']:\n                    pattern = rf'({key}_)([\\d.e-]+)'\n                    match = re.search(pattern, filename)\n                    if match:\n                        value = float(match.group(2))\n                        if key in ['lr', 'weight_decay']:\n                            x.append(np.log10(value))\n                        else:\n                            x.append(value)\n                data[filename] = {\n                    'x': x,\n                    'test_standard_acc': content['test_standard_acc'],\n                    'test_robust_acc': np.mean([v for k, v in content.items() if k.startswith('test_') and k != 'test_standard_acc'])\n                }\n    return data\n\ndef get_non_dominated_solutions(data):\n    F = np.array([[1 - d['test_standard_acc'], 1 - d['test_robust_acc']] for d in data.values()])\n    nds = NonDominatedSorting()\n    fronts = nds.do(F)\n    non_dominated = fronts[0]\n    return F[non_dominated]\n\ndef plot_non_dominated_solutions(ax, solutions, label, color):\n    ax.scatter(solutions[:, 0], solutions[:, 1], label=label, color=color)\n    # ax.plot(solutions[:, 0], solutions[:, 1], color=color)\n# 主函数\n    \n\ndef compare_nsga2_results_all(res):\n    fig, ax = plt.subplots(figsize=(10, 8))\n\n    # Define color mapping\n    colors = plt.cm.rainbow(np.linspace(0, 1, len(res)))\n\n    for (k, v), color in zip(res.items(), colors):\n        data = load_data(v)\n        \n        # Get all data points\n        all_points = np.array([[1 - d['test_standard_acc'], 1 - d['test_robust_acc']] for d in data.values()])\n        \n        # Get non-dominated solutions\n        non_dominated_data = get_non_dominated_solutions(data)\n\n        # Plot all points with transparency\n        ax.scatter(all_points[:, 0], all_points[:, 1], label=f'{k} (all)', color=color, alpha=0.3)\n        \n        # Plot non-dominated solutions without transparency\n        ax.scatter(non_dominated_data[:, 0], non_dominated_data[:, 1], label=f'{k} (non-dominated)', color=color, edgecolors='black')\n\n    # Set chart properties\n    ax.set_xlabel('Test Standard Accuracy')\n    ax.set_ylabel('Test Robust Accuracy')\n    ax.set_title('Comparison of Solutions for Different Sizes')\n    ax.legend()\n    ax.grid(True)\n\n    # Invert x and y axes to show accuracy instead of error\n    ax.invert_xaxis()\n    ax.invert_yaxis()\n\n    # Save the chart\n    plt.savefig('compare_all.png')\n    plt.close(fig)\n    \n\ndef compare_nsga2_results(res):\n    # 加载两个文件夹的数据\n    all_res = {}\n    fig, ax = plt.subplots(figsize=(10, 8))\n\n    # 定义颜色映射\n    colors = plt.cm.rainbow(np.linspace(0, 1, len(res)))\n\n    for (k, v), color in zip(res.items(), colors):\n        data = load_data(v)\n        \n        # 获取非支配解\n        non_dominated_data = get_non_dominated_solutions(data)\n\n        # 对非支配解进行1-操作\n        
non_dominated_data = 1 - non_dominated_data\n\n        # 绘制非支配解，使用不同的颜色\n        plot_non_dominated_solutions(ax, non_dominated_data, f'{k}', color)\n\n    # 设置图表属性\n    ax.set_xlabel('Test Standard Accuracy')\n    ax.set_ylabel('Test Robust Accuracy')\n    ax.set_title('Comparison of Non-Dominated Solutions for Different Sizes')\n    ax.legend()\n    ax.grid(True)\n\n    # 显示图表\n    plt.savefig('compare.png')\n    \ndef calculate_variable_importance(res):\n    # 聚合所有场景的数据\n    aggregated_data = {}\n    for k, v in res.items():\n        data = load_data(v)\n        for key, value in data.items():\n            if key not in aggregated_data:\n                aggregated_data[key] = value\n\n    # 定义变量名称\n    variable_names = ['lr', 'weightdecay', 'momentum', 'dropout_rate']\n    importance = {'test_standard_acc': {}, 'test_robust_acc': {}}\n\n    for i, var_name in enumerate(variable_names):\n        X = np.array([entry['x'][i] for entry in aggregated_data.values()])\n        y_standard = np.array([entry['test_standard_acc'] for entry in aggregated_data.values()])\n        y_robust = np.array([entry['test_robust_acc'] for entry in aggregated_data.values()])\n\n        # 使用线性回归模型来评估变量重要性\n        model_standard = ols(f'y ~ x', data={'x': X, 'y': y_standard}).fit()\n        model_robust = ols(f'y ~ x', data={'x': X, 'y': y_robust}).fit()\n\n        # 计算F-value和p-value作为重要性指标\n        importance['test_standard_acc'][var_name] = {\n            'f_value': model_standard.fvalue,\n            'p_value': model_standard.f_pvalue\n        }\n        importance['test_robust_acc'][var_name] = {\n            'f_value': model_robust.fvalue,\n            'p_value': model_robust.f_pvalue\n        }\n\n    return importance\n\ndef plot_variable_importance(importance):\n    vars = list(importance['test_standard_acc'].keys())\n    f_values_standard = [stats['f_value'] for stats in importance['test_standard_acc'].values()]\n    f_values_robust = [stats['f_value'] for stats in importance['test_robust_acc'].values()]\n\n    x = np.arange(len(vars))\n    width = 0.35\n\n    fig, ax = plt.subplots(figsize=(12, 6))\n    rects1 = ax.bar(x - width/2, f_values_standard, width, label='Test Standard Accuracy')\n    rects2 = ax.bar(x + width/2, f_values_robust, width, label='Test Robust Accuracy')\n\n    ax.set_ylabel('F-value')\n    ax.set_title('Variable Importance Comparison')\n    ax.set_xticks(x)\n    ax.set_xticklabels(vars, rotation=45)\n    ax.legend()\n\n    plt.tight_layout()\n    plt.savefig('variable_importance_comparison.png')\n    plt.close()\n\ndef visualize_data_with_metrics(data, metric_name, output_file):\n    \"\"\"\n    Visualize the input data with corresponding metrics.\n    \n    :param data: A dictionary where keys are sample names and values are dictionaries\n                 containing 'x' (input variables) and metric values.\n    :param metric_name: The name of the metric to visualize.\n    :param output_file: The name of the file to save the plot.\n    \"\"\"\n    X = np.array([sample['x'] for sample in data.values()])\n    y = np.array([sample[metric_name] for sample in data.values()])\n\n    # Check the dimensionality of X\n    n_dims = X.shape[1]\n\n    if n_dims <= 2:\n        # If X has 2 or fewer dimensions, plot directly\n        fig = plt.figure(figsize=(10, 8))\n        if n_dims == 1:\n            plt.scatter(X, y, c=y, cmap='viridis')\n            plt.xlabel('Input Variable')\n        else:  # n_dims == 2\n            plt.scatter(X[:, 0], X[:, 1], c=y, cmap='viridis')\n            
plt.xlabel('First Input Variable')\n            plt.ylabel('Second Input Variable')\n    else:\n        # If X has more than 2 dimensions, use PCA for dimensionality reduction\n        pca = PCA(n_components=2)\n        X_pca = pca.fit_transform(X)\n\n        fig = plt.figure(figsize=(10, 8))\n        plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, cmap='viridis')\n        plt.xlabel('First Principal Component')\n        plt.ylabel('Second Principal Component')\n\n    plt.colorbar(label=metric_name)\n    plt.title(f'{metric_name} vs Input Variables')\n    plt.tight_layout()\n    plt.savefig(output_file)\n    plt.close(fig)\n\n    print(f\"Plot saved as {output_file}\")\n\n# Example usage in the main function:\nif __name__ == \"__main__\":\n    # results_18 = './results_aug/non_augment'\n    # results_34 = './results_size/results_34'\n    results_50 = './results_size/results_50'\n    # res = {'res_18':results_18, 'res_34':results_34, 'res_50':results_50}\n    # compare_nsga2_results_all(res)\n    \n    \n    # results_non_aug = './results_aug/non_augment'\n    # results_aug =  './results_aug/augment'\n    res = { 'without augment':results_50}\n    # compare_nsga2_results_all(res)\n    \n\n    # 计算变量重要性\n    importance = calculate_variable_importance(res)\n    \n    # 打印重要性结果\n    print(\"Variable Importance:\")\n    for metric, vars in importance.items():\n        print(f\"\\n{metric}:\")\n        for var, stats in vars.items():\n            print(f\"  {var}: F-value = {stats['f_value']:.4f}, p-value = {stats['p_value']:.4f}\")\n\n    # 可视化重要性\n    plot_variable_importance(importance)\n\n    # Load data\n    results_non_aug = './results_size/results_34'\n    data = load_data(results_non_aug)\n\n    # Visualize data for standard accuracy\n    visualize_data_with_metrics(data, 'test_standard_acc', 'standard_acc_visualization.png')\n\n    # Visualize data for robust accuracy\n    visualize_data_with_metrics(data, 'test_robust_acc', 'robust_acc_visualization.png')\n"
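\n# --- Illustrative sketch (not part of the analysis above): a toy check of the\n# minimization convention used with pymoo's NonDominatedSorting. The sample\n# points below are made-up values, purely for demonstration.\n#     F = np.array([[0.2, 0.3], [0.1, 0.5], [0.3, 0.1], [0.4, 0.4]])\n#     fronts = NonDominatedSorting().do(F)\n#     print(fronts[0])  # indices of the Pareto-optimal (non-dominated) rows\n"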
  },
  {
    "path": "transopt/ResultAnalysis/AnalysisBase.py",
    "content": "import abc\nimport json\nfrom collections import defaultdict\nfrom dataclasses import dataclass\nfrom typing import Dict, Hashable, List, Tuple, Union\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\n\nfrom transopt.KnowledgeBase import KnowledgeBase\nfrom transopt.utils.serialization import (convert_np_to_bulidin,\n                                          output_to_ndarray,\n                                          vectors_to_ndarray)\n\n\n@dataclass\nclass Result():\n    \"\"\"\n    Class to store the results of the analysis.\n    \"\"\"\n    def __init__(self):\n        self.X = None\n        self.Y = None\n        self.best_X = None\n        self.best_Y = None\n\n\nclass AnalysisBase(abc.ABC, metaclass=abc.ABCMeta):\n    def __init__(self, exper_folder, methods, seeds, tasks, start = 0, end = None):\n        self._exper_folder = exper_folder\n        self._methods = methods\n        self._seeds = seeds\n        self._tasks = tasks\n        self._init = start\n        self._end = end\n        self.results = {}\n        self._task_names = set()\n        self._colors = self.assign_colors_to_methods()\n\n    def read_data_from_kb(self):\n        for method in self._methods:\n            self.results[method] = defaultdict(dict)\n            for seed in self._seeds:\n                self.results[method][seed] = defaultdict(dict)\n\n                file_path = f'{self._exper_folder}/{method}/{seed}_KB.json'\n                database = KnowledgeBase(file_path)\n\n                for dataset_id in database.get_all_dataset_id():\n                    dataset = database.get_dataset_by_id(dataset_id)\n                    task_name = dataset['name']\n                    if task_name.split('_')[0] not in self._tasks:\n                        continue\n\n                    input_vector = dataset['input_vector']\n                    output_value = dataset['output_value']\n                    r = Result()\n                    r.X = vectors_to_ndarray(dataset['dataset_info']['variable_name'], input_vector)\n                    r.Y = output_to_ndarray(output_value)\n                    if self._end is not None:\n                        r.X = r.X[:self._end]\n                        r.Y = r.Y[:self._end]\n                    else:\n                        assert len(r.Y) == len(r.X)\n                        self._end = len(r.Y)\n                    best_id = np.argmin(r.Y)\n                    r.best_Y = r.Y[best_id]\n                    r.best_X = r.X[best_id]\n\n                    self.results[method][seed][task_name] = r\n                    self._task_names.add(task_name)\n\n    def save_results_to_json(self, file_path):\n        with open(file_path, 'w') as f:\n            json.dump(self.results, f, default=convert_np_to_bulidin)\n\n    def load_results_from_json(self, file_path):\n        def convert(dct):\n            if 'type' in dct and dct['type'] == 'ndarray':\n                return np.array(dct['value'])\n            return dct\n\n        with open(file_path, 'r') as f:\n            self.results = json.load(f, object_hook=convert)\n\n    def get_results_by_order(self, order=None):\n        \"\"\"\n        Get results from the nested dictionary based on the specified order.\n        Args:\n            order (list, optional): The order in which results should be organized.\n                Defaults to [\"task\", \"method\", \"seed\"].\n        Returns:\n            dict: A dictionary of results organized according to the specified order.\n        
\"\"\"\n\n        if order is None:\n            order = [\"task\", \"method\", \"seed\"]\n\n        valid_keys = {\"task\", \"method\", \"seed\"}\n        assert len(order) == 3 and set(order) == valid_keys, \"Order must be a permutation of 'task', 'method', and 'seed'\"\n\n        # Retrieve the corresponding category based on the type of the key\n        def get_key(key):\n            if key == 'task':\n                return self._task_names\n            elif key == 'method':\n                return self._methods\n            elif key == 'seed':\n                return self._seeds\n\n        # Retrieve the corresponding data from the existing results.\n        def get_from_original_results(key_list):\n            first_original_key = key_list[order.index('method')]\n            second_original_key = key_list[order.index('seed')]\n            third_original_key = key_list[order.index('task')]\n            return self.results[first_original_key][second_original_key][third_original_key]\n\n        # Define dictionaries for each level of order\n        levels = {key: get_key(key) for key in order}\n\n        new_results = {}\n        for first_key in levels[order[0]]:\n            new_results[first_key] = defaultdict(dict)\n            for second_key in levels[order[1]]:\n                new_results[first_key][second_key] = defaultdict(dict)\n                for third_key in levels[order[2]]:\n                    new_results[first_key][second_key][third_key] = get_from_original_results(\n                        [first_key, second_key, third_key])\n\n        return new_results\n\n    def assign_colors_to_methods(self):\n        \"\"\"\n        Assign a unique color from Matplotlib's 'tab10' color cycle to each method.\n\n        Args:\n        methods (list): A list of method names.\n\n        Returns:\n        dict: A dictionary where keys are method names and values are their assigned colors.\n        \"\"\"\n        # Using the 'tab10' color cycle from Matplotlib\n        rgb_colors = [\n            (141, 211, 199),\n            (255, 255, 179),\n            (190, 186, 218),\n            (251, 128, 114),\n            (128, 177, 211),\n            (253, 180, 98),\n            (179, 222, 105),\n            (252, 205, 229),\n            (217, 217, 217),\n            (188, 128, 189),\n            (204, 235, 197)\n        ]\n\n        color_strings = []\n        for rgb in rgb_colors:\n            color_str = f\"rgb,255:red,{rgb[0]}; green,{rgb[1]}; blue,{rgb[2]}\"\n            color_strings.append(color_str)\n\n        # Creating a dictionary to store method names and their assigned colors\n        method_colors = {}\n        for i, method in enumerate(self._methods):\n            color_index = i % len(color_strings)  # Cycle through colors if there are more methods than colors\n            color = color_strings[color_index]\n            method_colors[method] = color\n\n        return method_colors\n\n    def get_color_for_method(self, method:Union[List,str]):\n        \"\"\"\n        Get the color(s) associated with a specific method or a list of methods.\n\n        Args:\n        method (str or list): The name of the method or a list of method names.\n\n        Returns:\n        str or list: The hex color code(s) associated with the method(s).\n        \"\"\"\n        if isinstance(method, str):\n            if method not in self._colors:\n                raise ValueError(f\"Method {method} not found in colors dictionary\")\n            return self._colors[method]\n\n        elif 
isinstance(method, list):\n            colors = []\n            for m in method:\n                if m not in self._colors:\n                    raise ValueError(f\"Method {m} not found in colors dictionary\")\n                colors.append(self._colors[m])\n            return colors\n\n        else:\n            raise TypeError(\"Input must be a string or a list of strings\")\n\n    def get_methods(self):\n        \"\"\"\n        Get the list of methods used in the analysis.\n\n        Returns:\n        list: A list of method names.\n        \"\"\"\n        return self._methods\n\n    def get_task_names(self):\n        \"\"\"\n        Get the list of task names used in the analysis.\n\n        Returns:\n        list: A list of task names.\n        \"\"\"\n        return self._task_names\n\n    def get_seeds(self):\n        \"\"\"\n        Get the list of seeds used in the analysis.\n\n        Returns:\n        list: A list of seeds.\n        \"\"\"\n        return self._seeds"
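\n\n# --- Minimal usage sketch (illustrative only: the experiment folder, method,\n# seed and task names below are assumptions, not shipped defaults). The loader\n# expects KB files under '{exper_folder}/{method}/{seed}_KB.json'.\n#     ab = AnalysisBase('./experiments/demo', methods=['BO_GP'], seeds=[0, 1], tasks=['Ackley'])\n#     ab.read_data_from_kb()\n#     by_task = ab.get_results_by_order(['task', 'method', 'seed'])\n#     r = by_task['Ackley_0_s']['BO_GP'][0]  # a Result holding X, Y, best_X, best_Y\n"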
  },
  {
    "path": "transopt/ResultAnalysis/AnalysisPipeline.py",
    "content": "from transopt.ResultAnalysis.PlotAnalysis import plot_registry\nfrom transopt.ResultAnalysis.TableAnalysis import table_registry\nfrom transopt.ResultAnalysis.TrackOptimization import track_registry\nfrom transopt.ResultAnalysis.AnalysisBase import AnalysisBase\nfrom transopt.ResultAnalysis.AnalysisReport import create_report\n\n\n\n\n\ndef analysis_pipeline(Exper_folder, tasks, methods, seeds, args):\n    ab = AnalysisBase(Exper_folder, tasks=tasks,methods= methods,seeds= seeds)\n    ab.read_data_from_kb()\n    Exper_folder = Exper_folder / 'analysis'\n    if args.comparision:\n        for plot_name, plot_func in plot_registry.items():\n            plot_func(ab, Exper_folder)  # 假设你的度量函数需要额外的参数\n\n        for table_name, table_func in table_registry.items():\n            table_func(ab, Exper_folder)  # 假设你的度量函数需要额外的参数\n\n    if args.track:\n        pass\n\n    if args.report:\n        create_report(Exper_folder)\n\n\n"
  },
  {
    "path": "transopt/ResultAnalysis/AnalysisReport.py",
    "content": "import os\nfrom pdf2image import convert_from_path\nfrom transopt.ResultAnalysis.ReportNote import Notes\n\n\ndef pdf_to_png(pictures_path):\n    assert os.path.exists(pictures_path), \"File 'Pictures' isn't exist!\"\n    pdf_files = [f for f in os.listdir(pictures_path) if f.endswith('.pdf')]\n    pictures = []\n\n    for pdf_file in pdf_files:\n        pdf_path = os.path.join(pictures_path, pdf_file)\n        images = convert_from_path(pdf_path, dpi=1000, fmt='png')\n        for image in images:\n            image.save(os.path.join(pictures_path, f\"{pdf_file.split('.')[0]}.png\"), 'png')\n        pictures.append(pdf_file.split('.')[0])\n    return pictures\n\n\ndef create_details_report(details_folders, save_path):\n    for details_folder in details_folders:\n        html_begin = f\"\"\"\n        <!DOCTYPE html>\n        <html>\n            <head>\n                <meta charset=\"UTF-8\">\n                <title> {details_folder.title().replace('_', ' ')} </title>\n        \"\"\"\n        html_begin += \"\"\"\n                <style>\n                    body {\n                        align-items: center;\n                        text-align: center;\n                    }\n\n                    .title_container {\n                        background-color: #024098;\n                        height: 100px;\n                        line-height: 100px;\n                    }\n\n                    .title {\n                        color: white;\n                        /* display: flex; */\n                        align-items: center;\n                        justify-content: center;\n                        white-space: nowrap;\n                    }\n\n                    .container {\n                        display: flex;\n                        flex-wrap: wrap;\n                        /* align-items: center; */\n                        justify-content: center;\n                    }\n\n                    .figure {\n                        margin-left: 10px;\n                        margin-right: 10px;\n                    }\n\n                    .button {\n                        display: inline-block;\n                        border-radius: 7px;\n                        background: #024098;\n                        color: white;\n                        text-align: center;\n                        font-size: 20px;\n                        width: 100px;\n                        height: 30px;\n                        cursor: pointer;\n                        text-decoration: none;\n                        margin-top: 15px;\n                    }\n                </style>\n            </head>\n        \"\"\"\n        html_begin += f\"\"\"\n        <body>\n            <div class=\"title_container\">\n                <h1 class=\"title\">{details_folder.title().replace('_', ' ')}</h1>\n            </div>\n\n            <a class=\"button\" href=\"../Report.html\">Back</a>\n        \"\"\"\n        html_end = \"\"\"\n        </body>\n        </html>\n        \"\"\"\n\n        pictures_path = save_path / details_folder\n        pictures = pdf_to_png(pictures_path)\n        function_name = set()\n        html_content = \"\"\"\"\"\"\n        for picture in pictures:\n            # 将同一种函数归为一类\n            if picture.split('_')[0] not in function_name:\n                if len(function_name) != 0:\n                    html_content += \"\"\"\n                        </div>\n\n                    \"\"\"\n                function_name.add(picture.split('_')[0])\n                
html_content += f\"\"\"\n                    <h2>{picture.split('_')[0]}</h2>\n                    <div class=\"container\">\n                \"\"\"\n            html_content += f\"\"\"\n                    <a class=\"figure\" href=\"{picture}.png\"><IMG SRC=\"{picture}.png\" width=\"350px\"></a>\n            \"\"\"\n\n        with open(pictures_path / f\"{details_folder.title().replace('_', ' ')}.html\", 'w', encoding='utf-8') as html:\n            html.write(html_begin + html_content + html_end)\n\n\ndef create_table_report(save_path):\n    html_begin = \"\"\"\n    <!DOCTYPE html>\n    <html>\n\n    <head>\n        <meta charset=\"UTF-8\">\n        <title> Tables </title>\n\n        <style>\n            body {\n                align-items: center;\n                text-align: center;\n            }\n\n            .title_container {\n                background-color: #024098;\n                height: 100px;\n                line-height: 100px;\n            }\n\n            .title {\n                color: white;\n                align-items: center;\n                justify-content: center;\n                white-space: nowrap;\n            }\n\n            .container {\n                display: flex;\n                flex-direction: column;\n                align-items: center;\n                justify-content: center;\n            }\n\n            .report_container {\n                padding: 50px;\n            }\n\n            .report {\n                width: 1050px;\n                border: 3px solid #024098;\n                border-radius: 15px;\n            }\n\n            .report_title {\n                color: white;\n                background-color: #024098;\n                border-radius: 10px 10px 0 0;\n                height: 50px;\n                line-height: 50px;\n                margin: 0;\n            }\n\n            .content {\n                padding-top: 15px;\n                padding-left: 10px;\n                padding-right: 10px;\n            }\n\n            .report_figure {\n                align-items: center;\n            }\n\n            .report_note {\n                text-align: justify;\n                margin: 10px;\n            }\n\n            .button {\n                display: inline-block;\n                border-radius: 7px;\n                background: #024098;\n                color: white;\n                text-align: center;\n                font-size: 20px;\n                width: 100px;\n                height: 30px;\n                cursor: pointer;\n                text-decoration: none;\n                margin-top: 15px;\n            }\n        </style>\n    </head>\n\n    <body>\n        <div class=\"title_container\">\n            <h1 class=\"title\">Tables</h1>\n        </div>\n\n        <a class=\"button\" href=\"../../Report.html\">Back</a>\n\n        <div class=\"container\">\n    \"\"\"\n    html_end = \"\"\"\n        </div>\n    </body>\n\n    </html>\n    \"\"\"\n\n    tables = pdf_to_png(save_path)\n    html_content = \"\"\"\"\"\"\n    for table in tables:\n        report_container = f\"\"\"\n                <div class=\"report_container\">\n                    <div class=\"report\">\n                        <h2 class=\"report_title\">{table.title().replace('_', ' ')}</h2>\n                        <div class=\"content\">\n                            <div class=\"report_figure\">\n                                <a href=\"{table}.png\"><IMG\n                                        SRC=\"{table}.png\" width=\"1000px\"></a>\n     
                       </div>\n                        </div>\n                    </div>\n                </div>\n        \"\"\"\n        html_content += report_container\n\n    with open(save_path / 'Tables.html', 'w', encoding='utf-8') as html:\n        html.write(html_begin + html_content + html_end)\n\n\ndef create_report(save_path):\n    html_begin = \"\"\"\n    <!DOCTYPE html>\n    <html>\n        <head>\n            <meta charset=\"UTF-8\">\n            <title> Analysis Report </title>\n\n            <style>\n                body {\n                    align-items: center;\n                    text-align: center;\n                }\n\n                .title_container {\n                    background-color: #024098;\n                    height: 100px;\n                    line-height: 100px;\n                }\n\n                .title {\n                    color: white;\n                    align-items: center;\n                    justify-content: center;\n                    white-space: nowrap;\n                }\n\n                .container {\n                    display: flex;\n                    flex-wrap: wrap;\n                    justify-content: center;\n                }\n\n                .report_container  {\n                    padding: 50px;\n                }\n\n                .report {\n                    width: 520px;\n                    height: 600px;\n                    border: 3px solid #024098;\n                    border-radius: 15px;\n                }\n\n                .report_title {\n                    color: white;\n                    background-color: #024098;\n                    border-radius: 10px 10px 0 0;\n                    height: 50px;\n                    line-height: 50px;\n                    margin: 0;\n                }\n\n                .content {\n                    padding-top: 15px;\n                    padding-left: 10px;\n                    padding-right: 10px;\n                }\n\n                .report_figure{\n                    align-items: center;\n                }\n\n                .report_note {\n                    text-align: justify;\n                    margin: 10px;\n                }\n\n                .button {\n                    display: inline-block;\n                    border-radius: 7px;\n                    background: #024098b0;\n                    color: white;\n                    text-align: center;\n                    font-size: 20px;\n                    width: 400px;\n                    height: 30px;\n                    cursor: pointer;\n                    text-decoration: none;\n                    margin-top: 15px;\n                }\n            </style>\n        </head>\n    <body>\n        <div class=\"title_container\">\n            <h1 class=\"title\">Analysis Rusults Report</h1>\n        </div>\n\n        <div class=\"container\">\n    \"\"\"\n    html_end = \"\"\"\n        </div>\n    </body>\n    </html>\n    \"\"\"\n\n    # 读取生成的图片名，并写入html\n    pictures_path = save_path / 'Overview' / 'Pictures'\n    pictures = pdf_to_png(pictures_path)\n    html_content = \"\"\"\"\"\"\n    for picture in pictures:\n        report_container = f\"\"\"\n        <div class=\"report_container\">\n            <div class=\"report\">\n                <h2 class=\"report_title\">{picture.title().replace('_', ' ')}</h2>\n                <div class=\"content\">\n                    <div class=\"report_figure\">\n                        <a href=\"Overview/Pictures/{picture}.png\"><IMG 
SRC=\"Overview/Pictures/{picture}.png\" width=\"450px\"></a>\n                    </div>\n                    <div class=\"report_note\">\n                        <p><b>Note:</b> {Notes[picture]} </p>\n                    </div>\n                </div>\n            </div>\n        </div>\n        \"\"\"\n        html_content += report_container\n\n    # 更多 information 的链接\n    table_path = save_path / 'Overview' / 'Table'\n    create_table_report(table_path)\n    report_container = \"\"\"\n        <div class=\"report_container\">\n            <div class=\"report\">\n                <h2 class=\"report_title\">More Information</h2>\n                <div class=\"content\">\n                    <a class=\"button\" href=\"Overview/Table/Tables.html\">Tables</a>\n    \"\"\"\n\n    folders = [f for f in os.listdir(save_path) if os.path.isdir(os.path.join(save_path, f))]\n    details_folders = [f for f in folders if f != 'Overview']\n    if len(details_folders) != 0:\n        create_details_report(details_folders, save_path)\n        for details_folder in details_folders:\n            report_container += f\"\"\"\n                    <a class=\"button\" href=\"{details_folder}/{details_folder.title().replace('_', ' ')}.html\">{details_folder.title().replace('_', ' ')}</a>\n            \"\"\"\n\n    report_container += \"\"\"\n                </div>\n            </div>\n        </div>\n    \"\"\"\n    html_content += report_container\n\n    with open(save_path / 'Report.html', 'w', encoding='utf-8') as html:\n        html.write(html_begin + html_content + html_end)\n\n"
  },
  {
    "path": "transopt/ResultAnalysis/CasualAnalysis.py",
    "content": "\nfrom transopt.ResultAnalysis.PlotAnalysis import plot_registry\nfrom transopt.ResultAnalysis.TableAnalysis import table_registry\nfrom transopt.ResultAnalysis.AnalysisBase import AnalysisBase\nfrom transopt.ResultAnalysis.AnalysisReport import create_report\n\n\ndef casual_analysis(Exper_folder, tasks, methods, seeds, args):\n    ab = AnalysisBase(Exper_folder, tasks=tasks,methods= methods,seeds= seeds)\n    ab.read_data_from_kb()\n    Exper_folder = Exper_folder / 'analysis'\n\n\n    \n"
  },
  {
    "path": "transopt/ResultAnalysis/CompileTex.py",
    "content": "import os\nimport subprocess\nimport shutil\n\n\ndef compile_tex(tex_path, output_folder):\n    # 保存当前工作目录\n    original_cwd = os.getcwd()\n\n    # 将路径转换为绝对路径\n    tex_path = os.path.abspath(tex_path)\n    output_folder = os.path.abspath(output_folder)\n\n    # 获取文件名和文件夹路径\n    folder, filename = os.path.split(tex_path)\n    name, _ = os.path.splitext(filename)\n\n    # 切换到tex文件所在的文件夹\n    os.chdir(folder)\n\n    try:\n        # 编译tex文件\n        subprocess.run(['pdflatex', filename], check=True)\n\n        # 裁剪PDF文件\n        pdf_path = os.path.join(folder, name + '.pdf')\n        cropped_pdf_path = pdf_path.replace('.pdf', '-crop.pdf')\n        subprocess.run(['pdfcrop', pdf_path, cropped_pdf_path], check=True)\n\n        # 将裁剪后的PDF文件移动到输出文件夹，并去掉-crop\n        output_pdf_path = os.path.join(output_folder, name + '.pdf')\n        shutil.move(cropped_pdf_path, output_pdf_path)\n\n    except subprocess.CalledProcessError as e:\n        print(f\"命令执行失败: {e}\")\n    finally:\n        # 切换回原始工作目录\n        os.chdir(original_cwd)\n\n    # 删除.aux和.log文件以及未裁剪的PDF文件\n    aux_path = os.path.join(folder, name + '.aux')\n    log_path = os.path.join(folder, name + '.log')\n    if os.path.exists(aux_path):\n        os.remove(aux_path)\n    if os.path.exists(log_path):\n        os.remove(log_path)\n    if os.path.exists(pdf_path):\n        os.remove(pdf_path)"
  },
  {
    "path": "transopt/ResultAnalysis/CorrelationAnalysis.py",
    "content": "\n\nimport numpy as np\nimport dcor\nfrom sklearn.metrics import mutual_info_score\nfrom transopt.ResultAnalysis.AnalysisBase import AnalysisBase\nfrom transopt.utils.Normalization import normalize\n\ndef correlation_analysis(Exper_folder, tasks, methods, seeds, args):\n    ab = AnalysisBase(Exper_folder, tasks=tasks,methods= methods,seeds= seeds, args=args)\n    ab.read_data_from_kb()\n    task_names = ab.get_task_names()\n    for method in methods:\n        for seed in seeds:\n            for task in task_names:\n                a = MutualInformation(ab, task, method, seed)\n    Exper_folder = Exper_folder / 'analysis'\n\ndef MutualInformation(ab:AnalysisBase, dataset_name, method, seed):\n    results = ab.get_results_by_order(['method', 'seed', 'task'])\n    res = results[method][seed][dataset_name]\n    Y = res.Y\n    num_objective = Y.shape[0]\n\n    mi = mutual_info_score(normalize(Y[0]), normalize(Y[1]))\n    distance_corr = dcor.distance_correlation(normalize(Y[0]), normalize(Y[1]))\n\n    print(\"Distance Correlation:\", distance_corr)\n    print(\"Mutual Information:\", mi)\n\n\n"
  },
  {
    "path": "transopt/ResultAnalysis/MakeGif.py",
    "content": "import os\nfrom PIL import Image\n\ndef make_gif(folder_path):\n    # 获取文件夹中的所有图片文件\n    image_files = [file for file in os.listdir(folder_path) if file.endswith('.png')]\n\n    # 按照图片序号进行排序\n    image_files.sort(key=lambda x: int(x.split('_')[0]))\n\n    images = []\n    for file in image_files:\n        # 读取每个图片文件\n        image_path = os.path.join(folder_path, file)\n        image = Image.open(image_path)\n\n        # 将图片添加到列表中\n        images.append(image)\n\n    # 设置保存 GIF 的文件路径和名称\n    gif_path = os.path.join(folder_path, 'animation.gif')\n\n    # 将图片列表保存为 GIF 动画\n    images[0].save(gif_path, save_all=True, append_images=images[1:], duration=1000, loop=0,)\n\nif __name__ == '__main__':\n    task_list_2d = [\n        'Ackley_10_s',\n        # 'StyblinskiTang_10_s',\n        'MPB5_10_s',\n        'LevyR_10_s',\n        # 'SVM_10_s',\n    ]\n    task_list_5d = [\n        # 'Ackley_10_s',\n        # 'StyblinskiTang_10_s',\n        # 'MPB5_10_s',\n        # 'LevyR_10_s',\n        'NN_72_s',\n    ]\n\n    task_list_8d = [\n        # 'Ackley_10_s',\n        # 'StyblinskiTang_10_s',\n        # 'MPB5_10_s',\n        'LevyR_10_s',\n        # 'XGB_10_s',\n    ]\n\n    Dim_ = 2\n    Method_list = [\n        # 'INC_MHGP',\n        # 'WS_RGPE',\n        # 'MT_MOGP',\n        # 'LFL_MOGP',\n        # 'ELLA_GP',\n        # 'BO_GP',\n        'TMTGP'\n    ]\n    # Seed_list = list(range(10))\n    Seed_list = [0]\n\n    Exp_name = 'test5'\n    Exper_floder = '../../LFL_experiments/{}'.format(Exp_name)\n\n    if Dim_ == 2:\n        task_list = task_list_2d\n    elif Dim_ == 5:\n        task_list = task_list_5d\n    elif Dim_ == 8:\n        task_list = task_list_8d\n\n    # for Method in Method_list:\n    #     for Prob in task_list:\n    #         for seed in Seed_list:\n    #             for i in range(int(Prob.split('_')[1])):\n    #                 make_gif(Exper_floder+f\"/figs/contour/{Method}/{seed}/{Prob.split('_')[0]}_{i}_{Prob.split('_')[2]}\")\n\n    for Method in Method_list:\n        for Prob in task_list:\n            for seed in Seed_list:\n                for i in range(int(Prob.split('_')[1])):\n                    make_gif(Exper_floder+f\"/figs/contour/{Method}/{seed}/{Prob.split('_')[0]}_{i}_{Prob.split('_')[2]}\")"
  },
  {
    "path": "transopt/ResultAnalysis/PFAnalysis.py",
    "content": "import numpy as np\nfrom sklearn.metrics import mutual_info_score\n\nfrom transopt.ResultAnalysis.AnalysisBase import AnalysisBase\nfrom transopt.utils.Normalization import normalize\n\n\ndef parego_analysis(Exper_folder, tasks, methods, seeds, args):\n    ab = AnalysisBase(Exper_folder, tasks=tasks,methods= methods,seeds= seeds, args=args)\n    ab.read_data_from_kb()\n    task_names = ab.get_task_names()\n    for method in methods:\n        for seed in seeds:\n            for task in task_names:\n                a = MutualInformation(ab, task, method, seed)\n    Exper_folder = Exper_folder / 'analysis'"
  },
  {
    "path": "transopt/ResultAnalysis/PlotAnalysis.py",
    "content": "import numpy as np\nfrom collections import Counter, defaultdict\nfrom transopt.ResultAnalysis.AnalysisBase import AnalysisBase\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import MultipleLocator\nimport pandas as pds\nimport os\nimport seaborn as sns\nfrom transopt.utils.sk import Rx\nfrom pathlib import Path\nimport scipy\nimport tikzplotlib\nfrom sklearn.cluster import DBSCAN\nfrom transopt.ResultAnalysis.CompileTex import compile_tex\nimport matplotlib.gridspec as gridspec\nplot_registry = {}\nimport re\n\n# 注册函数的装饰器\ndef plot_register(name):\n    def decorator(func_or_class):\n        if name in plot_registry:\n            raise ValueError(f\"Error: '{name}' is already registered.\")\n        plot_registry[name] = func_or_class\n        return func_or_class\n    return decorator\n\n\n\n@plot_register('sk')\ndef plot_sk(ab:AnalysisBase, save_path:Path):\n    cr_results = {}\n    results = ab.get_results_by_order([\"task\", \"method\", \"seed\"])\n\n    for task_name, tasks_r in results.items():\n        result = {}\n        for method, method_r in tasks_r.items():\n            cr_list = []\n            for seed, result_obj in method_r.items():\n                cr = result_obj.best_Y\n                cr_list.append(cr)\n            result[method] = cr_list\n\n        a = Rx.data(**result)\n        RES = Rx.sk(a)\n        for r in RES:\n            if r.rx in cr_results:\n                cr_results[r.rx].append(r.rank)\n            else:\n                cr_results[r.rx] = [r.rank]\n\n    df = pds.DataFrame(cr_results)\n\n    sns.set_theme(style=\"whitegrid\", font='FreeSerif')\n    plt.figure(figsize=(12, 7.3))\n    # plt.ylim(bottom=0.9, top=len(method_names)+0.1)\n    ax = plt.gca()  # 获取坐标轴对象\n    y_major_locator = MultipleLocator(1)  # 设置坐标的主要刻度间隔\n    ax.yaxis.set_major_locator(y_major_locator)  # 应用在纵坐标上\n    sns.violinplot(data=df, inner=\"quart\")\n    plt.title('Skott knott', fontsize=30, y=1.01)\n    plt.xlabel('Algorithm Name', fontsize=25, labelpad=-7)\n    plt.ylabel('Rank', fontsize=25)\n    plt.yticks(fontsize=20)\n    plt.xticks(fontsize=20, rotation=10)\n\n    save_path = Path(save_path / 'Overview')\n    pdf_path = Path(save_path / 'Pictures')\n    tex_path = Path(save_path / 'tex')\n    save_path.mkdir(parents=True, exist_ok=True)\n    pdf_path.mkdir(parents=True, exist_ok=True)\n    tex_path.mkdir(parents=True, exist_ok=True)\n    tikzplotlib.save(tex_path / \"scott_knott.tex\")\n\n    with open(tex_path / \"scott_knott.tex\", 'r', encoding='utf-8') as f:\n        content = f.read()\n\n    # 添加preamble和end document\n    preamble = r\"\\documentclass{article}\" + \"\\n\" + \\\n               r\"\\usepackage{pgfplots}\" + \"\\n\" + \\\n               r\"\\usepackage{tikz}\" + \"\\n\" + \\\n               r\"\\begin{document}\" + \"\\n\" + \\\n               r\"\\pagestyle{empty}\" + \"\\n\"\n    end_document = r\"\\end{document}\" + \"\\n\"\n    # 替换 false 为 true\n    content = re.sub(r'majorticks=false', 'majorticks=true', content)\n    pattern = r'axis line style={lightgray204},\\n'\n    content = re.sub(pattern, '', content)\n    # 插入字号控制\n    insert_text = r\"font=\\large,\" + \"\\n\" + \\\n                  r\"tick label style={font=\\small},\" + \"\\n\" + \\\n                  r\"label style={font=\\normalsize},\" + \"\\n\"\n    insert_position = content.find(r'tick align=outside,')\n    modified_content = content[:insert_position] + insert_text + content[insert_position:]\n\n    # 将修改后的内容写回文件\n    with open(tex_path / 
\"scott_knott.tex\", 'w', encoding='utf-8') as f:\n        f.write(preamble + modified_content + end_document)\n\n    compile_tex(tex_path / \"scott_knott.tex\", pdf_path)\n    plt.close()\n\n\n\n@plot_register('cr')\ndef convergence_rate(ab:AnalysisBase, save_path:Path, **kwargs):\n    cr_list = []\n    cr_all = {}\n    cr_results = {}\n    def acc_iter(Y, anchor_value):\n        for i in range(1, len(Y)):\n            best_fn = np.min(Y[:i])\n            if best_fn <= anchor_value:\n                return i/len(Y)\n        return 1\n\n    results = ab.get_results_by_order([\"method\", \"seed\", \"task\"])\n    best_Y_values = defaultdict(list)\n\n    # 遍历 data 字典，收集 best_Y 值\n    for method, tasks in results.items():\n        for seed, task_seed in tasks.items():\n            for task_name, result_obj in task_seed.items():\n                best_Y = result_obj.best_Y\n                if best_Y is not None:\n                    best_Y_values[task_name].append(best_Y)\n\n    # 计算并返回每个 task_name 下 best_Y 值的 3/4 分位数\n    quantiles = {task_name: np.percentile(values, 75) for task_name, values in best_Y_values.items()}\n\n    for method, tasks in results.items():\n        for seed, task_seed in tasks.items():\n            for task_name, result_obj in task_seed.items():\n                Y = result_obj.Y\n                if Y is None:\n                    raise ValueError(f\"Y is not set for method {method}, task {task_name}\")\n\n                cr = acc_iter(Y, anchor_value=quantiles[task_name])\n                cr_list.append(cr)\n\n        cr_all[method] = cr_list\n\n    a = Rx.data(**cr_all)\n    RES = Rx.sk(a)\n    for r in RES:\n        if r.rx in cr_results:\n            cr_results[r.rx].append(r.rank)\n        else:\n            cr_results[r.rx] = [r.rank]\n\n    cr_results = pds.DataFrame(cr_results)\n\n    sns.set_theme(style=\"whitegrid\", font='FreeSerif')\n    plt.figure(figsize=(12, 7.3))\n    # plt.ylim(bottom=0.9, top=len(method_names)+0.1)\n    ax = plt.gca()  # 获取坐标轴对象\n    y_major_locator = MultipleLocator(1)  # 设置坐标的主要刻度间隔\n    ax.yaxis.set_major_locator(y_major_locator)  # 应用在纵坐标上\n    sns.violinplot(data=cr_results, inner=\"quart\")\n    plt.title('Convergence Rate', fontsize=30, y=1.01)\n    plt.xlabel('Algorithm Name', fontsize=25, labelpad=-7)\n    plt.ylabel('Rate', fontsize=25)\n    plt.yticks(fontsize=20)\n    plt.xticks(fontsize=20, rotation=10)\n\n    save_path = Path(save_path / 'Overview')\n    pdf_path = Path(save_path / 'Pictures')\n    tex_path = Path(save_path / 'tex')\n    save_path.mkdir(parents=True, exist_ok=True)\n    pdf_path.mkdir(parents=True, exist_ok=True)\n    tex_path.mkdir(parents=True, exist_ok=True)\n    tikzplotlib.save(tex_path / \"convergence_rate.tex\")\n\n    with open(tex_path / \"convergence_rate.tex\", 'r', encoding='utf-8') as f:\n        content = f.read()\n\n    # 添加preamble和end document\n    preamble = r\"\\documentclass{article}\" + \"\\n\" + \\\n               r\"\\usepackage{pgfplots}\" + \"\\n\" + \\\n               r\"\\usepackage{tikz}\" + \"\\n\" + \\\n               r\"\\begin{document}\" + \"\\n\" + \\\n               r\"\\pagestyle{empty}\" + \"\\n\"\n    end_document = r\"\\end{document}\" + \"\\n\"\n    # 替换 false 为 true\n    content = re.sub(r'majorticks=false', 'majorticks=true', content)\n    pattern = r'axis line style={lightgray204},\\n'\n    content = re.sub(pattern, '', content)\n    # 插入字号控制\n    insert_text = r\"font=\\large,\" + \"\\n\" + \\\n                  r\"tick label style={font=\\small},\" + \"\\n\" + 
\\\n                  r\"label style={font=\\normalsize},\" + \"\\n\"\n    insert_position = content.find(r'tick align=outside,')\n    modified_content = content[:insert_position] + insert_text + content[insert_position:]\n\n    # 将修改后的内容写回文件\n    with open(tex_path / \"convergence_rate.tex\", 'w', encoding='utf-8') as f:\n        f.write(preamble + modified_content + end_document)\n\n    compile_tex(tex_path / \"convergence_rate.tex\", pdf_path)\n    plt.close()\n\n\ndef save_traj_data(ab, save_path):\n    # 先找出所有的任务名称\n    results = ab.get_results_by_order([\"task\", \"method\", \"seed\"])\n\n    for task_name, tasks_r in results.items():\n        # 为每个任务创建一个字典来存储数据\n        task_data = {}\n\n        for method, method_r in tasks_r.items():\n            res = []\n\n            for seed, result_obj in method_r.items():\n                Y = result_obj.Y\n                if Y is not None:\n                    res.append(np.minimum.accumulate(Y).flatten())\n\n            if res:\n                # 计算中位数和标准差\n                res_array = np.array(res)\n                median = np.median(res_array, axis=0)\n                std = np.std(res_array, axis=0)\n\n                # 将数据存储到字典中\n                task_data[f'{method}_mean'] = median\n                task_data[f'{method}_low'] = median - std\n                task_data[f'{method}_high'] = median + std\n\n        if task_data:\n            # 创建保存路径\n            os.makedirs(save_path / 'traj'/ 'tex', exist_ok=True)\n\n            # 设置文件路径\n            file_path = save_path / 'traj'/ 'tex'/ f\"{task_name}.dat\"\n\n            # 获取序号的起点\n            start_idx = ab._init\n\n            # 选择从 start_idx 开始的数据\n            end_idx = start_idx + len(median)\n            for key in task_data.keys():\n                task_data[key] = task_data[key][start_idx:end_idx]\n\n            # 将数据保存到文件\n            with open(file_path, 'w') as f:\n                # 写入列名\n                col_names = ' '.join(['id'] + list(task_data.keys()))\n                f.write(col_names + '\\n')\n\n                # 写入数据\n                for i in range(len(task_data[list(task_data.keys())[0]])):\n                    row_data = ' '.join([str(start_idx + i)] + [f'{x[i]:0.8f}' for x in task_data.values()])\n                    f.write(row_data + '\\n')\n\n            print(f\"Data saved for {task_name}\")\n\n@plot_register('traj')\ndef traj2latex(ab: AnalysisBase, save_path: Path):\n    # 从 ab 对象中获取任务名称和方法名称\n    save_traj_data(ab, save_path)\n    results = ab.get_results_by_order([\"task\", \"method\", \"seed\"])\n    methods = ab.get_methods()\n\n    # 从 ab 对象中获取 start_idx, y_max 和 y_min\n    start_idx = ab._init\n    end_idx = ab._end\n\n    # 创建保存路径\n    os.makedirs(save_path / 'traj' / 'tex', exist_ok=True)\n\n\n    # 设置文件路径\n    for task_name, tasks_r in results.items():\n        all_data = []\n\n        for method, method_r in tasks_r.items():\n            for seed, result_obj in method_r.items():\n                Y = result_obj.Y\n                if Y is not None:\n                    min_values = np.minimum.accumulate(Y)\n                    all_data.append(min_values.flatten())\n\n        if all_data:\n            all_data = np.concatenate(all_data)\n            y_min = np.min(all_data) - np.std(all_data)\n            y_max = np.max(all_data) + np.std(all_data)\n\n        tex_save_path = save_path / 'traj' / 'tex' / f\"{task_name}.tex\"\n        data_file = f\"{task_name}.dat\"\n        # 开始写入 LaTeX 代码\n        latex_code = f\"\"\"\n        
\\\\documentclass{{article}}\n        \\\\usepackage{{pgfplots}}\n        \\\\usepackage{{tikz}}\n        \\\\usetikzlibrary{{intersections}}\n        \\\\usepackage{{helvet}}\n        \\\\usepackage[eulergreek]{{sansmath}}\n        \\\\usepgfplotslibrary{{fillbetween}}\n\n        \\\\begin{{document}}\n        \\\\pagestyle{{empty}}\n\n\n        \\\\pgfplotsset{{compat=1.12,every axis/.append style={{\n            font = \\\\large,\n            grid = major,\n            xlabel = {{\\\\# of FEs}},\n            ylabel = {{$f(\\\\mathbf{{x}}^\\\\ast)$}},\n            thick,\n            xmin={start_idx},\n            xmax={end_idx},  % Adjust as needed\n            ymin={y_min},\n            ymax={y_max},\n            line width = 1pt,\n            tick style = {{line width = 0.8pt}}\n        }}}}\n        \\pgfplotsset{{every plot/.append style={{very thin}}}}\n        \\\\begin{{tikzpicture}}\n            \\\\begin{{axis}}[\n                title={{${task_name}$}},\n                width=\\\\textwidth,\n                height=0.5\\\\textwidth,\n            ]\"\"\"\n\n\n        for method in methods:\n            # 这里需要根据你的数据文件的具体结构来调整\n            latex_code += f\"\"\"\n            \\\\addplot[color={{{ab.get_color_for_method(method)}}}, solid, line width=1pt]table [x = id, y = {method}_mean]{{{data_file}}};\n            \\\\addlegendentry{{{method}}};\n            \"\"\"\n\n        for method in methods:\n            # 这里需要根据你的数据文件的具体结构来调整\n\n            latex_code += f\"\"\"\n            \\\\addplot[color={{{ab.get_color_for_method(method)}}}, name path={method}_L, draw=none] table[x = id, y = {method}_low] {{{data_file}}};\n            \\\\addplot[color={{{ab.get_color_for_method(method)}}}, name path={method}_U, draw=none] table[x = id, y = {method}_high] {{{data_file}}};\n            \\\\addplot[color={{{ab.get_color_for_method(method)}}},opacity=0.3] fill between[of={method}_U and {method}_L];\n            \"\"\"\n\n        latex_code += f\"\"\"\n                    \\\\end{{axis}}\n            \\\\end{{tikzpicture}}\n        \\\\end{{document}}\"\"\"\n\n        # 将 LaTeX 代码保存到文件\n        with open(tex_save_path, 'w') as f:\n            f.write(latex_code)\n        try:\n            compile_tex(tex_save_path, save_path / 'traj')\n        except:\n            pass\n\n        print(f\"LaTeX code has been saved to {tex_save_path}\")\n\n\n\n@plot_register('violin')\ndef plot_violin(ab:AnalysisBase, save_path, **kwargs):\n    data = {'Method': [], 'Performance rank': []}\n    method_names = set()\n\n    results = ab.get_results_by_order([\"task\", \"seed\", \"method\"])\n\n    for task_name, task_r in results.items():\n        for seed, seed_r in task_r.items():\n            res = {}\n            for method, result_obj in seed_r.items():\n                method_names.add(method)\n                Y = result_obj.Y\n                if Y is not None:\n                    min_values = np.min(Y)\n                    res[method] = min_values\n            sorted_value = sorted(res.values())\n            for v_id, v in enumerate(sorted_value):\n                for k, vv in res.items():\n                    if v == vv:\n                        data['Method'].append(k)\n                        data['Performance rank'].append(v_id+1)\n\n    sns.set_theme(style=\"whitegrid\", font='FreeSerif')\n    plt.figure(figsize=(12, 7.3))\n    plt.ylim(bottom=0.9, top=len(method_names)+0.1)\n    ax = plt.gca()  # 获取坐标轴对象\n    y_major_locator = MultipleLocator(1)  # 设置坐标的主要刻度间隔\n    
ax.yaxis.set_major_locator(y_major_locator)  # 应用在纵坐标上\n    sns.violinplot(x='Method', y='Performance rank', data=data,\n                   order=list(method_names),\n                   inner=\"box\", color=\"silver\", cut=0, linewidth=3)\n    plt.title('Violin plot', fontsize=30, y=1.01)\n    plt.xlabel('Algorithm Name', fontsize=25, labelpad=-7)\n    plt.ylabel('Performance rank', fontsize=25)\n    plt.yticks(fontsize=20)\n    plt.xticks(fontsize=20, rotation=10)\n\n    save_path = Path(save_path / 'Overview')\n    pdf_path = Path(save_path / 'Pictures')\n    tex_path = Path(save_path / 'tex')\n    save_path.mkdir(parents=True, exist_ok=True)\n    pdf_path.mkdir(parents=True, exist_ok=True)\n    tex_path.mkdir(parents=True, exist_ok=True)\n    tikzplotlib.save(tex_path / \"violin.tex\")\n    with open(tex_path / \"violin.tex\", 'r', encoding='utf-8') as f:\n        content = f.read()\n\n    # 添加preamble和end document\n    preamble = r\"\\documentclass{article}\" + \"\\n\" + \\\n               r\"\\usepackage{pgfplots}\" + \"\\n\" + \\\n               r\"\\usepackage{tikz}\" + \"\\n\" + \\\n               r\"\\begin{document}\" + \"\\n\" + \\\n               r\"\\pagestyle{empty}\" + \"\\n\"\n    end_document = r\"\\end{document}\" + \"\\n\"\n    # 替换 false 为 true\n    content = re.sub(r'majorticks=false', 'majorticks=true', content)\n    pattern = r'axis line style={lightgray204},\\n'\n    content = re.sub(pattern, '', content)\n    # 插入字号控制\n    insert_text = r\"font=\\large,\" + \"\\n\" + \\\n                  r\"tick label style={font=\\small},\" + \"\\n\" + \\\n                  r\"label style={font=\\normalsize},\" + \"\\n\"\n    insert_position = content.find(r'tick align=outside,')\n    modified_content = content[:insert_position] + insert_text + content[insert_position:]\n\n    # 将修改后的内容写回文件\n    with open(tex_path / \"violin.tex\", 'w', encoding='utf-8') as f:\n        f.write(preamble + modified_content + end_document)\n\n    compile_tex(tex_path / \"violin.tex\", pdf_path)\n    plt.close()\n\n\n@plot_register('box')\ndef plot_box(ab:AnalysisBase, save_path, **kwargs):\n    if 'mode' in kwargs:\n        mode = kwargs['mode']\n    else:\n        mode = 'median'\n    methods = set()\n\n    results = ab.get_results_by_order([\"method\", \"task\", \"seed\"])\n\n    result_list = []\n    for method, method_r in results.items():\n        methods.add(method)\n        result = []\n        for task, task_r in method_r.items():\n            best = []\n            for seed, result_obj in task_r.items():\n                if result_obj is not None:\n                    Y = result_obj.Y\n                    if Y is not None:\n                        min_values = np.min(Y)\n                        best.append(min_values)\n            if mode == 'median':\n                result.append(np.median(best))\n            elif mode == 'mean':\n                result.append(np.mean(best))\n        result_list.append(result)\n    result_list = np.array(result_list).T\n\n    ranks = np.array([scipy.stats.rankdata(x, method='min') for x in result_list])\n    df = pds.DataFrame(ranks, columns=methods)\n\n    sns.set_theme(style='whitegrid', font='FreeSerif')\n    plt.figure(figsize=(12, 8))\n    ax = plt.gca()\n    y_major_locator = MultipleLocator(1)\n    ax.yaxis.set_major_locator(y_major_locator)\n    sns.boxplot(df, color='#c2d0e9')\n    plt.title('Box plot', fontsize=30, y=1.03)\n    plt.xlabel('Algorithm Name', fontsize=25)\n    plt.ylabel('Rank', fontsize=25)\n    plt.xticks(fontsize=20, 
rotation=10)\n    plt.yticks(fontsize=20)\n\n    save_path = Path(save_path / 'Overview')\n    pdf_path = Path(save_path / 'Pictures')\n    tex_path = Path(save_path / 'tex')\n    save_path.mkdir(parents=True, exist_ok=True)\n    pdf_path.mkdir(parents=True, exist_ok=True)\n    tex_path.mkdir(parents=True, exist_ok=True)\n    tikzplotlib.save(tex_path / \"box.tex\")\n\n    with open(tex_path / \"box.tex\", 'r', encoding='utf-8') as f:\n        content = f.read()\n\n        # 添加preamble和end document\n    preamble = r\"\\documentclass{article}\" + \"\\n\" + \\\n               r\"\\usepackage{pgfplots}\" + \"\\n\" + \\\n               r\"\\usepackage{tikz}\" + \"\\n\" + \\\n               r\"\\begin{document}\" + \"\\n\" + \\\n               r\"\\pagestyle{empty}\" + \"\\n\"\n    end_document = r\"\\end{document}\" + \"\\n\"\n\n    content = re.sub(r'majorticks=false', 'majorticks=true', content)\n    pattern = r'axis line style={lightgray204},\\n'\n    content = re.sub(pattern, '', content)\n    insert_text = r\"font=\\large,\" + \"\\n\" + \\\n                  r\"tick label style={font=\\small},\" + \"\\n\" + \\\n                  r\"label style={font=\\normalsize},\" + \"\\n\"\n    insert_position = content.find(r'tick align=outside,')\n    modified_content = content[:insert_position] + insert_text + content[insert_position:]\n\n    # 将修改后的内容写回文件\n    with open(tex_path / \"box.tex\", 'w', encoding='utf-8') as f:\n        f.write(preamble + modified_content + end_document)\n\n    compile_tex(tex_path / \"box.tex\", pdf_path)\n\n    plt.close()\n\n\n@plot_register('dbscan')\ndef dbscan_analysis(ab: AnalysisBase, save_path, **kwargs):\n    results = ab.get_results_by_order(['task', 'method', 'seed'])\n    tasks_names = set()\n    method_names = set()\n    result_of_n_clusters = {}\n    result_of_noise_points = {}\n    result_of_avg_cluster_size = {}\n\n    for task_name, task_r in results.items():\n        tasks_names.add(task_name)\n        result_of_n_clusters[task_name] = defaultdict(dict)\n        result_of_noise_points[task_name] = defaultdict(dict)\n        result_of_avg_cluster_size[task_name] = defaultdict(dict)\n        for method, method_r in task_r.items():\n            method_names.add(method)\n            result_of_n_clusters[task_name][method] = []\n            result_of_noise_points[task_name][method] = []\n            result_of_avg_cluster_size[task_name][method] = []\n            for seed, result_obj in method_r.items():\n                if result_obj is not None:\n                    Y = result_obj.Y\n                    X = result_obj.X\n                    if Y is not None:\n                        db = DBSCAN(eps=0.5, min_samples=5)\n                        # 执行聚类\n                        db.fit(X)\n                        # 获取聚类标签\n                        labels = db.labels_\n                        # 计算簇的数量（忽略噪声点，其标签为 -1）\n                        n_clusters = len(set(labels)) - (1 if -1 in labels else 0)\n                        cluster_sizes = Counter(labels)\n                        noise_points = cluster_sizes[-1]  # 标签为 -1 的点是噪声点\n                        # 计算平均簇大小（不包括噪声点）\n                        if n_clusters > 0:\n                            t_size = 0\n                            for ids, cs in cluster_sizes.items():\n                                if ids >= 0:\n                                    t_size += cs\n                            avg_cluster_size = t_size / n_clusters\n                        else:\n                            avg_cluster_size = 0\n\n  
                      result_of_n_clusters[task_name][method].append(n_clusters)\n                        result_of_noise_points[task_name][method].append(noise_points)\n                        result_of_avg_cluster_size[task_name][method].append(avg_cluster_size)\n\n    def modify_lex_file(tex_path, pdf_path):\n        with open(tex_path, 'r', encoding='utf-8') as f:\n            content = f.read()\n\n            # 添加preamble和end document\n        preamble = r\"\\documentclass{article}\" + \"\\n\" + \\\n                   r\"\\usepackage{pgfplots}\" + \"\\n\" + \\\n                   r\"\\usepackage{tikz}\" + \"\\n\" + \\\n                   r\"\\begin{document}\" + \"\\n\" + \\\n                   r\"\\pagestyle{empty}\" + \"\\n\"\n        end_document = r\"\\end{document}\" + \"\\n\"\n\n        content = re.sub(r'majorticks=false', 'majorticks=true', content)\n        pattern = r'axis line style={lightgray204},\\n'\n        content = re.sub(pattern, '', content)\n        insert_text = r\"font=\\large,\" + \"\\n\" + \\\n                      r\"tick label style={font=\\small},\" + \"\\n\" + \\\n                      r\"label style={font=\\normalsize},\" + \"\\n\"\n        insert_position = content.find(r'tick align=outside,')\n        modified_content = content[:insert_position] + insert_text + content[insert_position:]\n\n        # 将修改后的内容写回文件\n        with open(tex_path, 'w', encoding='utf-8') as f:\n            f.write(preamble + modified_content + end_document)\n\n        compile_tex(tex_path, pdf_path)\n\n    tex_path = save_path / 'dbscan' / 'tex'\n    pdf_path = save_path / 'dbscan'\n    save_path.mkdir(parents=True, exist_ok=True)\n    pdf_path.mkdir(parents=True, exist_ok=True)\n    tex_path.mkdir(parents=True, exist_ok=True)\n    for task_name in tasks_names:\n        df_n_clusters = pds.DataFrame(result_of_n_clusters[task_name])\n        sns.set_theme(style='whitegrid', font='FreeSerif')\n        plt.figure(figsize=(12, 8))\n        ax = plt.gca()\n        y_major_locator = MultipleLocator(1)\n        ax.yaxis.set_major_locator(y_major_locator)\n        sns.boxplot(data=df_n_clusters, order=method_names)\n        plt.title('Clusters', fontsize=30, y=1.03)\n        plt.ylabel('number', fontsize=25)\n        plt.xticks(fontsize=20, rotation=10)\n        plt.yticks(fontsize=20)\n        tikzplotlib.save(tex_path / f\"{task_name}_n_clusters.tex\")\n        modify_lex_file(tex_path / f\"{task_name}_n_clusters.tex\", pdf_path)\n        plt.close()\n\n        df_noise_points = pds.DataFrame(result_of_noise_points[task_name])\n        plt.figure(figsize=(12, 8))\n        ax = plt.gca()\n        y_major_locator = MultipleLocator(1)\n        ax.yaxis.set_major_locator(y_major_locator)\n        sns.boxplot(data=df_noise_points, order=method_names)\n        plt.title('Noise Points', fontsize=30, y=1.03)\n        plt.ylabel('number', fontsize=25)\n        plt.xticks(fontsize=20, rotation=10)\n        plt.yticks(fontsize=20)\n        tikzplotlib.save(tex_path / f\"{task_name}_noise_points.tex\")\n        modify_lex_file(tex_path / f\"{task_name}_noise_points.tex\", pdf_path)\n        plt.close()\n\n        df_avg_cluster_size = pds.DataFrame(result_of_avg_cluster_size[task_name])\n        plt.figure(figsize=(12, 8))\n        ax = plt.gca()\n        y_major_locator = MultipleLocator(1)\n        ax.yaxis.set_major_locator(y_major_locator)\n        sns.boxplot(data=df_avg_cluster_size, order=method_names)\n        plt.title('Avg Cluster Size', fontsize=30, y=1.03)\n        plt.ylabel('number', 
fontsize=25)\n        plt.xticks(fontsize=20, rotation=10)\n        plt.yticks(fontsize=20)\n        tikzplotlib.save(tex_path / f\"{task_name}_avg_cluster_size.tex\")\n        modify_lex_file(tex_path / f\"{task_name}_avg_cluster_size.tex\", pdf_path)\n        plt.close()\n\n\n@plot_register('heatmap')\ndef plot_heatmap(ab:AnalysisBase, save_path, **kwargs):\n    results = ab.get_results_by_order(['method', 'task', 'seed'])\n    methods = ab.get_methods()\n    tasks = list(ab.get_task_names())\n\n    # Step 1: Calculate the best result for each method, task, and seed\n    best_results = {method: {task: [] for task in tasks} for method in methods}\n    for method in methods:\n        for task in tasks:\n            for seed, result_obj in results[method][task].items():\n                if result_obj is not None and result_obj.Y is not None:\n                    best_results[method][task].append(result_obj.best_Y)\n\n    # Step 2: Calculate the mean of the best results for each method and task\n    mean_best_results = {method: {task: np.mean(best_results[method][task]) for task in tasks} for method in methods}\n\n    # Step 3: Create a dataframe for the heatmap\n    heatmap_data = pds.DataFrame(mean_best_results).T\n\n    # Step 4: Plot the heatmap\n    n_cols = len(tasks)\n    colormaps = ['Blues', 'Reds', 'Greens', 'Purples']  # Adjust as needed\n    fig = plt.figure(figsize=(n_cols * 2, 5))\n    gs = gridspec.GridSpec(1, n_cols, width_ratios=[1 for _ in range(n_cols)])\n    for col, task in enumerate(tasks):\n        ax = plt.subplot(gs[col])\n        sns.heatmap(heatmap_data[[task]], annot=True, cmap=colormaps[col % len(colormaps)], cbar=False, ax=ax, **kwargs)\n        if col == 0:\n            ax.set_ylabel('Method')\n        if col != 0:\n            ax.set_yticks([])\n            ax.set_yticklabels([])\n\n    plt.suptitle(\"Heatmap of Methods\")\n    fig.text(0.5, 0.01, 'Task Name', ha='center')\n    plt.tight_layout(rect=[0, 0.03, 1, 0.99])\n    save_path = Path(save_path / 'Overview')\n    png_path = Path(save_path / 'Pictures')\n    save_path.mkdir(parents=True, exist_ok=True)\n    png_path.mkdir(parents=True, exist_ok=True)\n\n    plt.savefig(png_path/'heatmap.png', format='png')\n\n"
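\n\n# --- Dispatch sketch: every function decorated with @plot_register(<name>) ends\n# up in plot_registry and takes (AnalysisBase, Path), mirroring how\n# AnalysisPipeline drives these plots. The path below is an illustrative assumption.\n#     for name, plot_func in plot_registry.items():\n#         plot_func(ab, Path('./experiments/demo/analysis'))\n"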
  },
  {
    "path": "transopt/ResultAnalysis/ReportNote.py",
    "content": "# There are some explanation about figures and tables\r\nNotes = {\r\n    'box': 'The box plot compares the performance of different algorithms across all problems, primarily '\r\n           'displaying the minimum value, the first quartile, the third quartile, and the maximum value.',\r\n    'violin': 'The violin plot compares the performance of different algorithms across all problems, combining '\r\n              'elements of kernel density estimation and box plots to provide a more detailed view of data '\r\n              'distribution.',\r\n    'convergence_rate': 'Convergence rate refers to the speed at which an optimization algorithm converges to a '\r\n                        'solution when addressing a problem. A higher convergence rate is often seen as an '\r\n                        'advantage in algorithm performance.',\r\n    'scott_knott': '123',\r\n    'compare_convergence_rate': '321',\r\n    'compare_mean': '111',\r\n}"
  },
  {
    "path": "transopt/ResultAnalysis/TableAnalysis.py",
    "content": "import numpy as np\nfrom collections import defaultdict\nfrom transopt.utils.sk import Rx\nimport scipy\nfrom transopt.ResultAnalysis.TableToLatex import matrix_to_latex\nfrom transopt.ResultAnalysis.AnalysisBase import AnalysisBase\nfrom transopt.ResultAnalysis.CompileTex import compile_tex\nimport os\n\ntable_registry = {}\n\n# 注册函数的装饰器\ndef Tabel_register(name):\n    def decorator(func_or_class):\n        if name in table_registry:\n            raise ValueError(f\"Error: '{name}' is already registered.\")\n        table_registry[name] = func_or_class\n        return func_or_class\n    return decorator\n\n@Tabel_register('mean')\ndef record_mean_std(ab:AnalysisBase, save_path, **kwargs):\n    # Similar to record_mean_std function in PeerComparison.py\n    res_mean = {}\n    res_std = {}\n    res_sig = {}\n    results = ab.get_results_by_order([\"task\", \"method\", \"seed\"])\n    for task_name, task_r in results.items():\n        result_mean = []\n        result_std = []\n        data = {}\n        data_mean = {}\n        for method, method_r in task_r.items():\n            best = []\n            for seed, result_obj in method_r.items():\n                best.append(result_obj.best_Y)\n                data[method] = best.copy()\n                data_mean[method] = (np.mean(best), np.std(best))\n                result_mean.append(np.mean(best))\n                result_std.append(np.std(best))\n\n        res_mean[task_name] = result_mean\n        res_std[task_name] = result_std\n        rst_m = {}\n        sorted_dic = sorted(data_mean.items(), key=lambda kv: (kv[1][0]))\n        for method in ab.get_methods():\n            if method == sorted_dic[0][0]:\n                rst_m[method] = '-'\n                continue\n            s, p = scipy.stats.mannwhitneyu(data[sorted_dic[0][0]], data[method], alternative='two-sided')\n            if p < 0.05:\n                rst_m[method] = '+'\n            else:\n                rst_m[method] = '-'\n        res_sig[task_name] = rst_m\n    latex_code = matrix_to_latex({'mean':res_mean, 'std':res_std, 'significance':res_sig}, list(ab.get_task_names()), list(ab.get_methods()),\n                                 caption='Performance comparisons of the quality of solutions obtained by different algorithms.')\n    save_path = save_path / 'Overview'\n    os.makedirs(save_path, exist_ok=True)\n    tex_save_path = save_path / 'tex'\n    os.makedirs(tex_save_path, exist_ok=True)\n    table_path = save_path / 'Table'\n    os.makedirs(table_path, exist_ok=True)\n\n    with open(tex_save_path / f\"compare_mean.tex\", 'w') as f:\n        f.write(latex_code)\n    try:\n        compile_tex(tex_save_path / f\"compare_mean.tex\" , table_path)\n    except:\n        pass\n\n    print(f\"LaTeX code has been saved to {tex_save_path}\")\n\n@Tabel_register('cr')\ndef record_convergence_rate(ab:AnalysisBase, save_path, **kwargs):\n    # Similar to record_convergence function in PeerComparison.py\n    res_mean = {}\n    res_std = {}\n    res_sig = {}\n\n    def acc_iter(Y, anchor_value):\n        for i in range(1, len(Y)):\n            best_fn = np.min(Y[:i])\n            if best_fn <= anchor_value:\n                return i/len(Y)\n\n        return 1\n    # 遍历 data 字典，收集 best_Y 值\n    results = ab.get_results_by_order([\"method\", \"seed\", \"task\"])\n    best_Y_values = defaultdict(list)\n    for method, tasks in results.items():\n        for seed, task_seed in tasks.items():\n            for task_name, result_obj in task_seed.items():\n                
best_Y = result_obj.best_Y\n                if best_Y is not None:\n                    best_Y_values[task_name].append(best_Y)\n\n    # Use the 75th percentile of the best_Y values for each task_name as the convergence anchor\n    quantiles = {task_name: np.percentile(values, 75) for task_name, values in best_Y_values.items()}\n    results = ab.get_results_by_order([\"task\", \"method\", \"seed\"])\n    for task_name, task_r in results.items():\n        result_mean = []\n        result_std = []\n        data = {}\n        data_mean = {}\n        for method, method_r in task_r.items():\n            best = []\n            for seed, result_obj in method_r.items():\n                Y = result_obj.Y\n                if Y is None:\n                    raise ValueError(f\"Y is not set for method {method}, task {task_name}\")\n\n                cr = acc_iter(Y, anchor_value=quantiles[task_name])\n                best.append(cr)\n\n            data[method] = best.copy()\n            data_mean[method] = (np.mean(best), np.std(best))\n            result_mean.append(np.mean(best))\n            result_std.append(np.std(best))\n\n        res_mean[task_name] = result_mean\n        res_std[task_name] = result_std\n\n        rst_m = {}\n        sorted_dic = sorted(data_mean.items(), key=lambda kv: (kv[1][0]), reverse=False)\n        for method in ab.get_methods():\n            if method == sorted_dic[0][0]:\n                rst_m[method] = '-'\n                continue\n            s, p = scipy.stats.mannwhitneyu(data[sorted_dic[0][0]], data[method], alternative='two-sided')\n            if p < 0.05:\n                rst_m[method] = '+'\n            else:\n                rst_m[method] = '-'\n        res_sig[task_name] = rst_m\n    latex_code = matrix_to_latex({'mean': res_mean, 'std': res_std, 'significance': res_sig}, list(ab.get_task_names()),\n                                 list(ab.get_methods()),\n                                 caption='Convergence rate comparison among different algorithms.')\n    save_path = save_path / 'Overview'\n    os.makedirs(save_path, exist_ok=True)\n    tex_save_path = save_path / 'tex'\n    os.makedirs(tex_save_path, exist_ok=True)\n    table_path = save_path / 'Table'\n    os.makedirs(table_path, exist_ok=True)\n\n    with open(tex_save_path / \"compare_convergence_rate.tex\", 'w') as f:\n        f.write(latex_code)\n    try:\n        compile_tex(tex_save_path / \"compare_convergence_rate.tex\", table_path)\n    except Exception:\n        pass\n\n    print(f\"LaTeX code has been saved to {tex_save_path}\")\n"
  },
  {
    "path": "transopt/ResultAnalysis/TableToLatex.py",
    "content": "import numpy as np\nfrom typing import Union, Dict\n\n\ndef matrix_to_latex(Data: Dict, col_names, row_names, caption, oder=\"min\"):\n    mean = Data[\"mean\"]\n    std = Data[\"std\"]\n    significance = Data[\"significance\"]\n    num_cols = len(mean.keys())\n    num_rows = len(row_names)\n\n    if len(col_names) != num_cols or len(row_names) != num_rows:\n        raise ValueError(\n            \"Mismatch between matrix dimensions and provided row/column names.\"\n        )\n\n    latex_code = []\n    # 添加文档类和宏包\n    latex_code.append(\"\\\\documentclass{article}\")\n    latex_code.append(\"\\\\usepackage{geometry}\")\n    latex_code.append(\"\\\\geometry{a4paper, margin=1in}\")\n    latex_code.append(\"\\\\usepackage{graphicx}\")\n    latex_code.append(\"\\\\usepackage{colortbl}\")\n    latex_code.append(\"\\\\usepackage{booktabs}\")\n    latex_code.append(\"\\\\usepackage{threeparttable}\")\n    latex_code.append(\"\\\\usepackage{caption}\")\n    latex_code.append(\"\\\\usepackage{xcolor}\")\n    latex_code.append(\"\\\\pagestyle{empty}\")\n\n    # 开始文档\n    latex_code.append(\"\\\\begin{document}\")\n    latex_code.append(\"\")\n    latex_code.append(\"\\\\begin{table*}[t!]\")\n    latex_code.append(\"    \\\\scriptsize\")\n    latex_code.append(\"    \\\\centering\")\n    latex_code.append(f\"    \\\\caption{{{caption}}}\")\n    latex_code.append(\"    \\\\resizebox{1.0\\\\textwidth}{!}{\")\n    latex_code.append(\"    \\\\begin{tabular}{c|\" + \"\".join([\"c\"] * (num_rows)) + \"}\")\n    latex_code.append(\"        \\\\hline\")\n\n    # Adding column names\n    col_header = \" & \".join([\"\"] + row_names) + \" \\\\\\\\\"\n    latex_code.append(\"        \" + col_header)\n    latex_code.append(\"        \\\\hline\")\n\n    # Adding rows\n    for i in range(num_cols):\n        str_data = []\n        for j in range(num_rows):\n            str_format = \"\"\n            if oder == \"min\":\n                if mean[col_names[i]][j] == np.min(mean[col_names[i]]):\n                    str_format += \"\\cellcolor[rgb]{ .682,  .667,  .667}\\\\textbf{\"\n                    str_format += \"%.3E(%.3E)\" % (\n                        float(mean[col_names[i]][j]),\n                        std[col_names[i]][j],\n                    )\n                    str_format += \"}\"\n                    str_data.append(str_format)\n                else:\n                    if significance[col_names[i]][row_names[j]] == \"+\":\n                        str_data.append(\n                            \"%.3E(%.3E)$^\\dagger$\"\n                            % (float(mean[col_names[i]][j]), std[col_names[i]][j])\n                        )\n                    else:\n                        str_data.append(\n                            \"%.3E(%.3E)\"\n                            % (float(mean[col_names[i]][j]), std[col_names[i]][j])\n                        )\n            else:\n                if mean[col_names[i]][j] == np.max(mean[col_names[i]]):\n                    str_format += \"\\cellcolor[rgb]{ .682,  .667,  .667}\\\\textbf{\"\n                    str_format += \"%.3E(%.3E)\" % (\n                        float(mean[col_names[i]][j]),\n                        std[col_names[i]][j],\n                    )\n                    str_format += \"}\"\n                    str_data.append(str_format)\n                else:\n                    if significance[col_names[i]][row_names[j]] == \"+\":\n                        str_data.append(\n                            \"%.3E(%.3E)$^\\dagger$\"\n     
                       % (float(mean[col_names[i]][j]), std[col_names[i]][j])\n                        )\n                    else:\n                        str_data.append(\n                            \"%.3E(%.3E)\"\n                            % (float(mean[col_names[i]][j]), std[col_names[i]][j])\n                        )\n        test_name = col_names[i].split(\"_\")[0] + col_names[i].split(\"_\")[1]\n        row_data = \" & \".join([\"\\\\texttt{\" + f\"{test_name}\" + \"}\"] + str_data) + \" \\\\\\\\\"\n        latex_code.append(\"        \" + row_data)\n\n    latex_code.append(\"        \\\\hline\")\n    latex_code.append(\"    \\\\end{tabular}\")\n    latex_code.append(\"    }\")\n    latex_code.append(\"    \\\\begin{tablenotes}\")\n    latex_code.append(\"        \\\\tiny\")\n    latex_code.append(\n        \"        \\\\item The labels in the first column are the combination of the first letter of test problem and the number of variables, e.g., A4 is Ackley problem with $n=4$.\"\n    )\n    latex_code.append(\n        \"        \\\\item $^\\\\dagger$ indicates that the best algorithm is significantly better than the other one according to the Wilcoxon signed-rank test at a 5\\\\% significance level.\"\n    )\n    latex_code.append(\"    \\\\end{tablenotes}\")\n    latex_code.append(\"\\\\end{table*}%\")\n    latex_code.append(\"\\\\end{document}\")\n\n    return \"\\n\".join(latex_code)\n"
  },
  {
    "path": "transopt/ResultAnalysis/TrackOptimization.py",
    "content": "import numpy as np\nfrom collections import Counter, defaultdict\nfrom transopt.ResultAnalysis.AnalysisBase import AnalysisBase\n\n\ntrack_registry = {}\n\n# 注册函数的装饰器\ndef track_register(name):\n    def decorator(func_or_class):\n        if name in track_registry:\n            raise ValueError(f\"Error: '{name}' is already registered.\")\n        track_registry[name] = func_or_class\n        return func_or_class\n    return decorator\n\n\n\n\n"
  },
  {
    "path": "transopt/ResultAnalysis/__init__.py",
    "content": ""
  },
  {
    "path": "transopt/__init__.py",
    "content": "\n\n\n"
  },
  {
    "path": "transopt/agent/__init__.py",
    "content": ""
  },
  {
    "path": "transopt/agent/app.py",
    "content": "import json\nimport os\nfrom multiprocessing import Process, Manager\n\nfrom flask import Flask, jsonify, request\nfrom flask_cors import CORS\nfrom services import Services\n\nfrom transopt.agent.registry import *\nfrom transopt.utils.log import logger\n\n\ndef create_app():\n    app = Flask(__name__)\n    \n    origins = os.getenv(\"CORS_ORIGINS\", \"*\")\n    CORS(app, resources={r\"/*\": {\"origins\": origins}})\n    \n    app.config['DEBUG'] = bool(os.getenv('DEBUG', True))\n    \n    manager = Manager()\n    task_queue = manager.Queue()\n    result_queue = manager.Queue()\n    db_lock = manager.Lock()\n\n    services = Services(task_queue, result_queue, db_lock)\n\n    @app.route(\"/api/generate-yaml\", methods=[\"POST\"])\n    def generate_yaml():\n        # try:\n        data = request.json\n        user_input = data.get(\"content\", {}).get(\"text\", \"\")\n        response_content = services.chat(user_input)\n        return jsonify({\"message\": response_content}), 200\n        # except Exception as e:\n        #     logger.error(f\"Error in generating YAML: {e}\")\n        #     return jsonify({\"error\": str(e)}), 500\n\n\n    @app.route(\"/api/Dashboard/tasks\", methods=[\"POST\"])\n    def report_send_tasks_information():\n        all_info = services.get_experiment_datasets()\n        all_tasks_info = []\n        for task_name, task_info in all_info:\n            info = task_info['additional_config']\n            info['problem_name'] = task_name\n            all_tasks_info.append(info)\n        \n        \n        return jsonify(all_tasks_info), 200\n\n\n    @app.route(\"/api/Dashboard/charts\", methods=[\"POST\"])\n    def report_update_charts_data():\n        data = request.json\n        user_input = data.get(\"taskname\", \"\")\n        charts = services.get_report_charts(user_input)\n        return jsonify(charts), 200\n\n\n    @app.route(\"/api/Dashboard/trajectory\", methods=[\"POST\"])\n    def report_update_trajectory_data():\n        data = request.json\n        user_input = data.get(\"taskname\", \"\")\n        # trajectory, 数据格式和以前一样 {\"TrajectoryData\":...}\n        charts = services.get_report_traj(user_input)\n        return jsonify(charts), 200\n\n\n    @app.route(\"/api/configuration/select_task\", methods=[\"POST\"])\n    def configuration_recieve_tasks():\n        tasks_info = request.json\n        # try:\n        services.receive_tasks(tasks_info) \n        # except Exception as e:\n        #     logger.error(f\"Error in searching dataset: {e}\")\n        #     return jsonify({\"error\": str(e)}), 500\n        \n        return {\"succeed\": True}, 200\n\n\n    @app.route(\"/api/configuration/select_algorithm\", methods=[\"POST\"])\n    def configuration_recieve_algorithm():\n        optimizer_info = request.json\n        print(optimizer_info)\n        # optimizer_info = {'SpaceRefiner': 'default', \n        #                   'SpaceRefinerParameters': '', \n        #                   'SpaceRefinerDataSelector': 'default', \n        #                   'SpaceRefinerDataSelectorParameters': '', \n        #                   'Sampler': 'default', \n        #                   'SamplerParameters': '', \n        #                   'SamplerInitNum': '11',\n        #                   'SamplerDataSelector': 'default', \n        #                   'SamplerDataSelectorParameters': '', \n        #                   'Pretrain': 'default', \n        #                   'PretrainParameters': '', \n        #                   'PretrainDataSelector': 
'default', \n        #                   'PretrainDataSelectorParameters': '', \n        #                   'Model': 'default', \n        #                   'ModelParameters': '', \n        #                   'ModelDataSelector': 'default', \n        #                   'ModelDataSelectorParameters': '', \n        #                   'ACF': 'default', \n        #                   'ACFParameters': '', \n        #                   'ACFDataSelector': 'default', \n        #                   'ACFDataSelectorParameters': '', \n        #                   'Normalizer': 'default', \n        #                   'NormalizerParameters': '', \n        #                   'NormalizerDataSelector': 'default', \n        #                   'NormalizerDataSelectorParameters': ''}\n        try:\n            services.receive_optimizer(optimizer_info)\n        except Exception as e:\n            logger.error(f\"Error in searching dataset: {e}\")\n            return jsonify({\"error\": str(e)}), 500\n        \n        return {\"succeed\": True}, 200\n\n\n    @app.route(\"/api/configuration/basic_information\", methods=[\"POST\"])\n    def configuration_basic_information():\n        data = request.json\n        user_input = data.get(\"paremeter\", \"\")\n\n        task_data = services.get_modules()\n        # with open('transopt/agent/page_service_data/configuration_basic.json', 'r') as file:\n        #     data = json.load(file)\n        print(services)\n        return jsonify(task_data), 200\n\n\n    @app.route(\"/api/configuration/dataset\", methods=[\"POST\"])\n    def configuration_dataset():\n        metadata_info = request.json\n        # print(metadata_info)\n        # metadata_info = {\n        #     \"object\": \"Space refiner\",\n        #     \"datasets\": [\"dataset1\", \"dataset2\"]\n        # }\n        if metadata_info['object'] == 'Narrow Search Space':\n            metadata_info['object'] = 'SpaceRefiner'\n        elif metadata_info['object'] == 'Initialization':\n            metadata_info['object'] = 'Sampler'\n        elif metadata_info['object'] == 'Pre-train':\n            metadata_info['object'] = 'Pretrain'\n        elif metadata_info['object'] == 'Surrogate Model':\n            metadata_info['object'] = 'Model'\n        elif metadata_info['object'] == 'Acquisition Function':\n            metadata_info['object'] = 'ACF'\n        \n        try:\n            services.set_metadata(metadata_info)\n        except Exception as e:\n            logger.error(f\"Error in searching dataset: {e}\")\n            return jsonify({\"error\": str(e)}), 500\n        \n        return {\"succeed\": True}, 200\n\n\n    @app.route(\"/api/Dashboard/errorsubmit\", methods=[\"POST\"])\n    def errorsubmit():\n        try:\n\n            return {\"succeed\": True}, 200\n        except Exception as e:\n            logger.error(f\"Error in searching dataset: {e}\")\n            return {\"error\": False}, 200\n    \n    @app.route(\"/api/configuration/search_dataset\", methods=[\"POST\"])\n    def configuration_search_dataset():\n        try:\n            data = request.json\n\n            dataset_name = data[\"task_name\"]\n            if data['search_method'] in ('Fuzzy', 'Hash'):\n                dataset_info = {}\n            elif data['search_method'] == 'LSH':\n                dataset_info = {\n                    \"num_variables\": data[\"num_variables\"],\n                    \"num_objectives\": data[\"num_objectives\"],\n                    \"variables\": [\n                        {\"name\": var_name} 
for var_name in data[\"variables_name\"].split(\",\")\n                    ],\n                }\n            else:\n                pass\n            datasets = services.search_dataset(data['search_method'], dataset_name, dataset_info)\n\n            return jsonify(datasets), 200\n        except Exception as e:\n            logger.error(f\"Error in searching dataset: {e}\")\n            return jsonify({\"error\": str(e)}), 500\n\n\n    @app.route(\"/api/configuration/delete_dataset\", methods=[\"POST\"])\n    def configuration_delete_dataset():\n        metadata_info = request.json\n        datasets = metadata_info[\"datasets\"]\n        services.remove_dataset(datasets) \n        return {\"succeed\": True}, 200\n\n\n    @app.route(\"/api/configuration/run\", methods=[\"POST\"])\n    def configuration_run():\n        run_info = request.json\n        \n        if \"Seeds\" in run_info:\n            seeds = [int(seed) for seed in run_info['Seeds'].split(\",\")]\n        else:\n            seeds = [0]\n        services.run_optimize(seeds)  # Handle process creation within run_optimize\n        \n        return jsonify({\"isSucceed\": True}), 200\n\n    @app.route(\"/api/configuration/run_progress\", methods=[\"POST\"])\n    def configuration_run_progress():\n        message = request.json\n        # Collect the progress of all currently running tasks\n        data = []\n        process_info = services.get_all_process_info()\n        for proc_id, proc in process_info.items():\n            if proc['status'] == 'running':\n                data.append({\n                    \"name\": f\"{proc['task']}_pid_{proc_id}\",\n                    \"progress\": str(proc['progress']),\n                })\n        \n        return jsonify(data), 200\n\n    @app.route(\"/api/configuration/stop_progress\", methods=[\"POST\"])\n    def configuration_stop_progress():\n        message = request.json\n        task_name = message['name']\n        print(task_name)\n        pid = int(task_name.split('_')[-1])\n        services.terminate_task(pid)\n\n        return {\"succeed\": True}, 200\n\n\n    @app.route(\"/api/RunPage/get_info\", methods=[\"POST\"])\n    def run_page_get_info():\n        data = request.json\n        user_input = data.get(\"action\", \"\")\n\n        task_data = services.get_configuration()\n        # with open('transopt/agent/page_service_data/configuration_info.json', 'r') as file:\n        #     data = json.load(file)\n        return jsonify(task_data), 200\n\n\n    @app.route(\"/api/comparison/selections\", methods=[\"POST\"])\n    def comparison_send_selections():\n        info = request.json\n        # When the Comparison page initializes, request the available search options\n        data = services.get_comparision_modules()\n\n        return jsonify(data), 200\n\n\n    @app.route(\"/api/comparison/choose_task\", methods=[\"POST\"])\n    def comparison_choose_tasks():\n        conditions = request.json\n        ret = []\n        charts_data = {}\n        for condition in conditions:\n            ret.append(services.comparision_search(condition)) \n        \n\n        charts_data['BoxData'] = services.get_box_plot_data(ret)\n        charts_data['TrajectoryData'] = services.construct_statistic_trajectory_data(ret)\n        return jsonify(charts_data), 200\n\n    return app\n\ndef main():\n    app = create_app()\n    app.run(debug=app.config['DEBUG'], port=5001)\n\nif __name__ == \"__main__\":\n    main()"
  },
  {
    "path": "transopt/agent/chat/openai_chat.py",
    "content": "import json\nimport subprocess\nimport sys\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Union\n\nimport yaml\nfrom openai.types.chat.chat_completion import ChatCompletion\nfrom pydantic import BaseModel\n\nfrom transopt.agent.config import RunningConfig\nfrom transopt.agent.registry import *\nfrom transopt.benchmark.instantiate_problems import InstantiateProblems\nfrom transopt.datamanager.manager import DataManager\nfrom transopt.optimizer.construct_optimizer import ConstructOptimizer\nfrom transopt.utils.log import logger\n\n\ndef dict_to_string(dictionary):\n    return json.dumps(dictionary, ensure_ascii=False, indent=4)\n\n\nclass Message(BaseModel):\n    \"\"\"Model for LLM messages\"\"\"\n\n    role: str  # The role of the message author (system, user, assistant, or function).\n    content: Optional[Union[str, List[Dict]]] = None  # The message content.\n    tool_call_id: Optional[str] = None  # ID for the tool call response\n    name: Optional[str] = None  # Name of the tool or function, if applicable\n    metrics: Dict[str, Any] = {}  # Metrics for the message.\n    \n\n    def get_content_string(self) -> str:\n        \"\"\"Returns the content as a string.\"\"\"\n        if isinstance(self.content, str):\n            return self.content\n        if isinstance(self.content, list):\n            return json.dumps(self.content)\n        return \"\"\n\n    def to_dict(self) -> Dict[str, Any]:\n        _dict = self.model_dump(exclude_none=True, exclude={\"metrics\"})\n        # Manually add the content field if it is None\n        if self.content is None:\n            _dict[\"content\"] = None\n        return _dict\n\n    def log(self, level: Optional[str] = None):\n        \"\"\"Log the message to the console.\"\"\"\n        _logger = getattr(logger, level or \"debug\")\n        \n        _logger(f\"============== {self.role} ==============\")\n        message_detail = f\"Content: {self.get_content_string()}\"\n        if self.tool_call_id:\n            message_detail += f\", Tool Call ID: {self.tool_call_id}\"\n        if self.name:\n            message_detail += f\", Name: {self.name}\"\n        _logger(message_detail)\n\n\nclass OpenAIChat:\n    history: List[Message]\n\n    def __init__(\n        self,\n        api_key,\n        model=\"gpt-3.5-turbo\",\n        base_url=\"https://api.openai.com/v1\",\n        client_kwargs: Optional[Dict[str, Any]] = None,\n        data_manager: Optional[DataManager] = None,\n    ):\n        self.base_url = base_url\n        self.model = model\n        self.api_key = api_key \n        self.client_kwargs = client_kwargs or {}\n\n        self.prompt = self._get_prompt()\n        self.is_first_msg = True\n        \n        self.history = []\n\n        self.data_manager = DataManager() if data_manager is None else data_manager\n        self.running_config = RunningConfig()\n\n    def _get_prompt(self):\n        \"\"\"Reads a prompt from a file.\"\"\"\n        current_dir = Path(__file__).parent\n        file_path = current_dir / \"prompt\"\n        with open(file_path, \"r\") as file:\n            return file.read()\n        \n    @property\n    def client(self):\n        \"\"\"Lazy initialization of the OpenAI client.\"\"\"\n        from openai import OpenAI\n        return OpenAI(\n            api_key=self.api_key, base_url=self.base_url,\n            **self.client_kwargs\n        )\n\n    def invoke_model(self, messages: List[Dict]) -> ChatCompletion:\n        self.history.extend(messages)\n        \n     
   tools = [\n            {\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": \"get_all_datasets\",\n                    \"description\": \"Show all available datasets in our system\",\n                    \"parameters\": {},\n                },\n            },\n            {\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": \"get_dataset_info\",\n                    \"description\": \"Show detailed information of a dataset according to the dataset name\",\n                    \"parameters\": {\n                        \"type\": \"object\",\n                        \"properties\": {\n                            \"dataset_name\": {\n                                \"type\": \"string\",\n                                \"description\": \"The name of the dataset\",\n                            },\n                        },\n                        \"required\": [\"dataset_name\"],\n                    },\n                },\n            },\n            {\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": \"get_all_problems\",\n                    \"description\": \"Show all optimization problems that our system supports\",\n                    \"parameters\": {},\n                },\n            },\n            \n            {\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": \"get_optimization_techniques\",\n                    \"description\": \"Show all optimization techniques supported in our system\",\n                    \"parameters\": {},\n                },\n            },\n                        \n            {\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": \"set_optimization_problem\",\n                    \"description\": \"Define or set an optimization problem based on user inputs for 'problem name', 'workload' and 'budget'.\",\n                    \"parameters\": {\n                        \"type\": \"object\",\n                        \"properties\": {\n                            \"problem_name\": {\n                                \"type\": \"string\",\n                                \"description\": \"The name of the optimization problem\",\n                            },\n                            \"workload\": {\n                                \"type\": \"integer\",\n                                \"description\": \"The workload number\",\n                            },\n                            \"budget\": {\n                                \"type\": \"integer\",\n                                \"description\": \"The budget, i.e., the number of function evaluations\",\n                            },\n                        },\n                        \"required\": [\"problem_name\", \"workload\", \"budget\"],\n                    },\n                },\n            },\n            \n            {\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": \"set_model\",\n                    \"description\": \"Set the surrogate model used in Bayesian optimization. The input model name should be one of the available models.\",\n                    \"parameters\": {\n                        \"type\": \"object\",\n                        \"properties\": {\n                            \"Model\": {\n                                
\"type\": \"string\",\n                                \"description\": \"The model name\",\n                            },\n                        },\n                        \"required\": [\"Model\"],\n                    },\n                },\n            },\n            \n            {\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": \"set_sampler\",\n                    \"description\": \"Set the sampler for the optimization process as user input. The input sampler name should be one of the available samplers.\",\n                    \"parameters\": {\n                        \"type\": \"object\",\n                        \"properties\": {\n                            \"Sampler\": {\n                                \"type\": \"string\",\n                                \"description\": \"The name of Sampler\",\n                            },\n                        },\n                        \"required\": [\"Sampler\"],\n                    },\n                },\n            },\n            \n            {\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": \"set_pretrain\",\n                    \"description\": \"Set the Pretrain methods. The input of users should include one of the available pretrain methods.\",\n                    \"parameters\": {\n                        \"type\": \"object\",\n                        \"properties\": {\n                            \"Pretrain\": {\n                                \"type\": \"string\",\n                                \"description\": \"The name of Pretrain method\",\n                            },\n                        },\n                        \"required\": [\"Pretrain\"],\n                    },\n                },\n            },\n            \n            {\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": \"set_normalizer\",\n                    \"description\": \"Set the normalization method to nomalize function evaluation and parameters. 
It requires one of the available normalization methods as input.\",\n                    \"parameters\": {\n                        \"type\": \"object\",\n                        \"properties\": {\n                            \"Normalizer\": {\n                                \"type\": \"string\",\n                                \"description\": \"The name of Normalization method\",\n                            },\n                        },\n                        \"required\": [\"Normalizer\"],\n                    },\n                },\n            },\n            \n            {\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": \"set_metadata\",\n                    \"description\": \"Set the metadata using a dataset stored in our system and specify a module to utilize this metadata.\",\n                    \"parameters\": {\n                        \"type\": \"object\",\n                        \"properties\": {\n                            \"module_name\": {\n                                \"type\": \"string\",\n                                \"description\": \"The name of the module that will use the metadata\",\n                            },\n                            \"dataset_name\": {\n                                \"type\": \"string\",\n                                \"description\": \"The name of the dataset stored in our system\",\n                            },\n                        },\n                        \"required\": [\"module_name\", \"dataset_name\"],\n                    },\n                },\n            },\n            \n            {\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": \"run_optimization\",\n                    \"description\": \"Run the optimization process using the configuration set so far.\",\n                    \"parameters\": {},\n                },\n            },\n            \n            {\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": \"show_configuration\",\n                    \"description\": \"Display all configurations set by the user so far, including the optimizer configuration, metadata configuration, and optimization problems\",\n                    \"parameters\": {},\n                },\n            },\n            \n            {\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": \"install_package\",\n                    \"description\": \"Install a Python package using pip\",\n                    \"parameters\": {\n                        \"type\": \"object\",\n                        \"properties\": {\n                            \"package_name\": {\n                                \"type\": \"string\",\n                                \"description\": \"The name of the package to install\",\n                            },\n                        },\n                        \"required\": [\"package_name\"],\n                    },\n                },\n            },      \n        ]\n                \n        response = self.client.chat.completions.create(\n            model=self.model,\n            messages=messages,\n            tools=tools,\n            tool_choice=\"auto\",\n            temperature=0.1,\n        )\n        response_message = response.choices[0].message\n        tool_calls = response_message.tool_calls\n        # Process tool calls if there are any\n        if tool_calls:\n            self.history.append(response_message)\n            for tool_call in tool_calls:\n                function_name = tool_call.function.name\n                
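# Dispatch the requested tool to its local implementation and record the result for the follow-up completion\n                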
function_args = json.loads(tool_call.function.arguments)\n                function_response = self.call_manager_function(function_name, **function_args)\n                tool_message = {\n                    \"role\": \"tool\",\n                    \"tool_call_id\": tool_call.id,\n                    \"name\": function_name,\n                    \"content\": function_response\n                }\n                self.history.append(tool_message)\n                \n            # Refresh the model with the function response and get a new response\n            response = self.client.chat.completions.create(\n                model=self.model,\n                messages=self.history,\n            )\n        \n        self.history.append(response.choices[0].message) \n        logger.debug(f\"Response: {response.choices[0].message.content}\")\n        return response\n\n    def get_response(self, user_input) -> str:\n        logger.debug(\"---------- OpenAI Response Start ----------\")\n        user_message = {\"role\": \"user\", \"content\": user_input}\n        logger.debug(f\"User: {user_input}\")\n        messages = [user_message]\n\n        if self.is_first_msg:\n            system_message = {\"role\": \"system\", \"content\": self.prompt}\n            messages.insert(0, system_message)\n            self.is_first_msg = False\n        else:\n            system_message = {\"role\": \"system\", \"content\": \"Don't tell me which function to use, just call it. Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous\"}\n            messages.insert(0, system_message)\n            \n\n        response = self.invoke_model(messages)\n        logger.debug(f\"Assistant: {response.choices[0].message.content}\")\n        logger.debug(\"---------- OpenAI Response End ----------\")\n        return response.choices[0].message.content \n    \n    def call_manager_function(self, function_name, **kwargs):\n        available_functions = {\n            \"get_all_datasets\": self.data_manager.get_all_datasets,\n            \"get_all_problems\": self.get_all_problems,\n            \"get_optimization_techniques\": self.get_optimization_techniques,\n            \"get_dataset_info\": lambda: self.data_manager.get_dataset_info(kwargs['dataset_name']),\n            \"set_optimization_problem\": lambda: self.set_optimization_problem(kwargs['problem_name'], kwargs['workload'], kwargs['budget']),\n            'set_space_refiner': lambda: self.set_space_refiner(kwargs['refiner']),\n            'set_sampler': lambda: self.set_sampler(kwargs['Sampler']),\n            'set_pretrain': lambda: self.set_pretrain(kwargs['Pretrain']),\n            'set_model': lambda: self.set_model(kwargs['Model']),\n            'set_normalizer': lambda: self.set_normalizer(kwargs['Normalizer']),\n            'set_metadata': lambda: self.set_metadata(kwargs['module_name'], kwargs['dataset_name']),\n            'run_optimization': self.run_optimization,\n            'show_configuration': self.show_configuration,\n            \"install_package\": lambda: self.install_package(kwargs['package_name']),\n        }\n        function_to_call = available_functions[function_name]\n        return json.dumps({\"result\": function_to_call()})\n    \n    def _initialize_modules(self):\n        import transopt.benchmark.synthetic\n        # import transopt.benchmark.CPD\n        import transopt.optimizer.acquisition_function\n        import transopt.optimizer.model\n        import 
transopt.optimizer.pretrain\n        import transopt.optimizer.refiner\n        import transopt.optimizer.sampler\n\n    def get_all_problems(self):\n        tasks_info = []\n\n        # tasks information\n        task_names = problem_registry.list_names()\n        for name in task_names:\n            if problem_registry[name].problem_type == \"synthetic\":\n                num_obj = problem_registry[name].num_objectives\n                num_var = problem_registry[name].num_variables\n                task_info = {\n                    \"name\": name,\n                    \"problem_type\": \"synthetic\",\n                    \"anyDim\": \"True\",\n                    'num_vars': [],\n                    \"num_objs\": [1],\n                    \"workloads\": [],\n                    \"fidelity\": [],\n                }\n            else:\n                num_obj = problem_registry[name].num_objectives\n                num_var = problem_registry[name].num_variables\n                fidelity = problem_registry[name].fidelity\n                workloads = problem_registry[name].workloads\n                problem_type = problem_registry[name].problem_type\n                task_info = {\n                    \"name\": name,\n                    \"problem_type\": problem_type,\n                    \"anyDim\": False,\n                    \"num_vars\": [num_var],\n                    \"num_objs\": [num_obj],\n                    \"workloads\": [workloads],\n                    \"fidelity\": [fidelity],\n                }\n            tasks_info.append(task_info)\n        return tasks_info\n    \n    def get_optimization_techniques(self):\n        basic_info = {}\n\n        selector_info = []\n        model_info = []\n        sampler_info = []\n        acf_info = []\n        pretrain_info = []\n        refiner_info = []\n        normalizer_info = []\n        \n        # collect the available methods from each registry\n        sampler_names = sampler_registry.list_names()\n        for name in sampler_names:\n            sampler_info.append(name)\n        basic_info[\"Sampler\"] = ','.join(sampler_info)\n\n        refiner_names = space_refiner_registry.list_names()\n        for name in refiner_names:\n            refiner_info.append(name)\n        basic_info[\"SpaceRefiner\"] = ','.join(refiner_info)\n\n        pretrain_names = pretrain_registry.list_names()\n        for name in pretrain_names:\n            pretrain_info.append(name)\n        basic_info[\"Pretrain\"] = ','.join(pretrain_info)\n\n        model_names = model_registry.list_names()\n        for name in model_names:\n            model_info.append(name)\n        basic_info[\"Model\"] = ','.join(model_info)\n\n        acf_names = acf_registry.list_names()\n        for name in acf_names:\n            acf_info.append(name)\n        basic_info[\"ACF\"] = ','.join(acf_info)\n\n        selector_names = selector_registry.list_names()\n        for name in selector_names:\n            selector_info.append(name)\n        basic_info[\"DataSelector\"] = ','.join(selector_info)\n        \n        normalizer_names = normalizer_registry.list_names()\n        for name in normalizer_names:\n            normalizer_info.append(name)\n        basic_info[\"Normalizer\"] = ','.join(normalizer_info)\n        \n        \n        return basic_info\n    \n    def set_optimization_problem(self, problem_name, workload, budget):        \n        problem_info = {}\n        if problem_name in problem_registry:\n            problem_info[problem_name] = {\n                'budget': budget,\n                'workload': workload,\n                'budget_type': 'Num_FEs',\n  
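              # \"params\" is a placeholder for problem-specific parameters (left empty here)\n  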
              \"params\": {},\n            }\n\n        self.running_config.set_tasks(problem_info)\n        return \"Succeed\"\n    \n    def set_space_refiner(self, refiner):\n        self.running_config.optimizer['SpaceRefiner'] = refiner\n        return f\"Succeed to set the space refiner {refiner}\"\n\n    def set_sampler(self, Sampler):\n        self.running_config.optimizer['Sampler'] = Sampler\n        return f\"Succeed to set the sampler {Sampler}\"\n    \n    \n    def set_pretrain(self, Pretrain):\n        self.running_config.optimizer['Pretrain'] = Pretrain\n        return f\"Succeed to set the pretrain {Pretrain}\"\n    \n    def set_model(self, Model):\n        self.running_config.optimizer['Model'] = Model\n        return f\"Succeed to set the model {Model}\"\n    \n    def set_normalizer(self, Normalizer):\n        self.running_config.optimizer['Normalizer'] = Normalizer\n        return f\"Succeed to set the normalizer {Normalizer}\"\n    \n    def set_metadata(self, module_name, dataset_name):\n        self.running_config.metadata[module_name] = dataset_name\n        return f\"Succeed to set the metadata {dataset_name} for {module_name}\"\n    \n    def run_optimization(self):\n        task_set = InstantiateProblems(self.running_config.tasks, 0)\n        optimizer = ConstructOptimizer(self.running_config.optimizer, 0)\n        \n        try:\n            while (task_set.get_unsolved_num()):\n                iteration = 0\n                search_space = task_set.get_cur_searchspace()\n                dataset_info, dataset_name = self.construct_dataset_info(task_set, self.running_config, seed=0)\n                \n                self.data_manager.db.create_table(dataset_name, dataset_info, overwrite=True)\n                optimizer.link_task(task_name=task_set.get_curname(), search_sapce=search_space)\n                \n                metadata, metadata_info = self.get_metadata('SpaceRefiner')\n                optimizer.search_space_refine(metadata, metadata_info)\n                \n                metadata, metadata_info = self.get_metadata('Sampler')\n                samples = optimizer.sample_initial_set(metadata, metadata_info)\n                \n                parameters = [search_space.map_to_design_space(sample) for sample in samples]\n                observations = task_set.f(parameters)\n                self.save_data(dataset_name, parameters, observations, iteration)\n                \n                optimizer.observe(samples, observations)\n                \n                #Pretrain\n                metadata, metadata_info = self.get_metadata('Model')\n                optimizer.meta_fit(metadata, metadata_info)\n        \n                while (task_set.get_rest_budget()):\n                    optimizer.fit()\n                    suggested_samples = optimizer.suggest()\n                    parameters = [search_space.map_to_design_space(sample) for sample in suggested_samples]\n                    observations = task_set.f(parameters)\n                    self.save_data(dataset_name, parameters, observations, iteration)\n                    \n                    optimizer.observe(suggested_samples, observations)\n                    iteration += 1\n                    \n                    print(\"Seed: \", 0, \"Task: \", task_set.get_curname(), \"Iteration: \", iteration)\n                    # if self.verbose:\n                    #     self.visualization(testsuits, suggested_sample)\n                task_set.roll()\n        except Exception as e:\n          
  raise e\n    def show_configuration(self):\n        conf = {'Optimization problem': self.running_config.tasks, 'Optimizer': self.running_config.optimizer, 'Metadata': self.running_config.metadata}\n        return dict_to_string(conf)\n    \n    def install_package(self, package_name: str) -> str:\n        \"\"\"Install a Python package using pip.\"\"\"\n        try:\n            subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", package_name])\n            return f\"Package '{package_name}' installed successfully.\"\n        except subprocess.CalledProcessError as e:\n            logger.error(f\"Failed to install package '{package_name}': {e}\")\n            return f\"Failed to install package '{package_name}'. Error: {str(e)}\""
  },
  {
    "path": "transopt/agent/chat/prompt",
    "content": "\n\nYou are an agent of the \"Transfer Optimization System,\" designed to solve optimization problems. The system can solve optimization problems with transfer learning for optimization techniques. \nYour primary roles are:\n1.Display system information, including datasets stored within the system, available optimization modules and methods, and the optimization problems that the system can address.\n2.Assist users in configuring and launching optimization problems using suitable methods. \n\n\nPlease inform users that in order to utilize our system, they are required to complete a four-step configuration process:\n\n1.Define the optimization problem, ensuring to specify both the workload and the budget.\n2.Configure the optimization method, which includes the following five modules: search space refiner, sampler, pre-train method, model, acquisition function, and normalizer. Use default methods for any modules not explicitly configured. Each module can be individually set with a specific method.\n3.Choose the metadata by selecting one or more datasets already available in the system.\n4.Run the optimization process.\"\n\n<!-- \nExample 1:\nInput:I want to tune the parameters of a deep neural network to improve its prediction accuracy. How should I use TOS to do this. -->\n\n\n\n\n\n"
  },
  {
    "path": "transopt/agent/chat/prompt.bak",
    "content": "Please transform my optimization group description into a JSON format according to the following template. Ensure the description adheres to the structure outlined below, including all necessary and optional fields:\n{\n  \"group_id\": \"Specify group ID, generated automatically if unspecified, starting from 1\",\n  \"group_type\": \"Specify 'Sequential' or 'Parallel'\",\n  \"tasks\": [\n    {\n      \"task_name\": \"Choose from 'HPOXGBoost', 'HPOSVM', 'HPORes18'\",\n      \"variables\": {\n        \"variable_name\": {\"type\": \"Specify type, e.g., 'categorical', 'integer', 'continuous'\", \"range or choices accroding to type\": [Specify range or choices]}\n      },\n      \"objectives\": {\n        \"objective_name\": {\"type\": \"Specify 'minimize' or 'maximize'\"}\n      },\n      \"fidelities\": {\n        \"fidelity_name\": {\"type\": \"Specify type, e.g., 'categorical', 'integer', 'continuous'\", \"range or choices according to type\": [Specify range or choices], \"default\": \"Specify default value\"}\n      },\n      \"workloads\": \"Mandatory, specify the name of the workloads\",\n      \"budget\": \"Mandatory, specify the budget\"\n    }\n  ],\n  \"algorithm\": {\n    \"name\": \"Specify algorithm name, default if unspecified is 'BO'\",\n    \"parameters\": {\n      \"parameter_name\": \"Specify parameter value, e.g., 'max_iter': 100\"\n    }\n  },\n  \"auxiliary\": {\n    \"selection_criteria\": \"Optional, specify criteria\",\n    \"using_stage\": \"Optional, specify stage\",\n  }\n}\nThe above JSON not only specifies the format, but also the names of each field and the requirements for the values. The main requirement for values should not be directly filled into the generated content.\n\n\nRequirements:\nGroup ID: Automatically generated, starting from 1.\nGroup Type: Mandatory. Indicate whether tasks are to be executed in a 'Sequential' or 'Parallel' manner.\nTasks: Mandatory. List each task, including mandatory fields like Task Name, Workloads, and Budget, and optional fields like Variables, Objectives, Fidelities. Optional fields' values should be {} if unspecified. \nAlgorithm: Optional. Specify the algorithm name and any parameters. If unspecified, 'BO' will be used as the default algorithm.\nAuxiliary: Optional. Include any additional information if necessary.\n\nOutput:\nThe output should only have two possibilities: \n2. If the description omits any mandatory fields  or the provided details contain inconsistencies, you should give an error message indicating the missing or incorrect information and do not generate the JSON structure. For example, if budget not specified, you should just give an error message like \"Budget is missing, ....\" and not generate the JSON structure."
  },
  {
    "path": "transopt/agent/chat/yaml_generator.py",
    "content": "from pathlib import Path\nfrom typing import Any, Dict\n\nimport yaml\nfrom transopt.utils.log import logger\nfrom agent.chat.openai_chat import Message, OpenAIChat\n\n\ndef get_prompt(file_name: str) -> str:\n    \"\"\"Reads a prompt from a file.\"\"\"\n    current_dir = Path(__file__).parent\n    file_path = current_dir / file_name\n    \n    with open(file_path, 'r') as file:\n        prompt = file.read()\n    return prompt\n\n\ndef parse_response(response: str) -> Dict[str, Any]:\n    \"\"\"Parses a string response into a structured Python dictionary.\"\"\"\n    try:\n        structured_info = yaml.safe_load(response)\n    except yaml.YAMLError as e:\n        logger.error(f\"Error parsing response into Python dict: {e}\")\n        structured_info = {}\n    return structured_info\n\n\ndef main():\n    # Assuming OpenAIChat and Message are defined elsewhere and imported correctly\n    openai_chat = OpenAIChat()\n    \n    print(\"Welcome to the YAML Generator!\")\n    user_input = input(\"\\nPlease describe the configuration you'd like to convert to YAML:\\n\")\n    \n    # Process the input using the OpenAI API\n    prompt = get_prompt(\"prompt\")  # Assuming the prompt file is named 'prompt.yml'\n    system_message = Message(role=\"system\", content=prompt)\n    user_message = Message(role=\"user\", content=user_input)\n    response_content = openai_chat.get_response([system_message, user_message])\n    \n    print(\"\\nAssistant's Response:\\n\")\n    print(response_content)\n\n    while True:\n        refine = input(\"\\nPlease refine your configuration or type 'exit' to quit:\\n\")\n        if refine.lower() == 'exit':\n            print(\"Thank you for using the YAML Generator!\")\n            break\n        \n        user_message = Message(role=\"user\", content=refine)\n        response_content = openai_chat.get_response([user_message])\n        \n        print(\"\\nAssistant's Response:\\n\")\n        print(response_content)\n\n\nif __name__ == \"__main__\":\n    main()"
  },
  {
    "path": "transopt/agent/config.py",
    "content": "\nclass Config:\n    DEBUG = True\n    OPENAI_API_KEY = \"sk-1XGNThXZQVYh6EI25b44Bb74940d4eEdBdDa81723e00C794\"\n    OPENAI_URL = \"https://aihubmix.com/v1\"\n\n\n\n\nclass RunningConfig:\n    _instance = None\n    _init = False  # 用于保证初始化代码只运行一次\n\n    def __new__(cls, *args, **kwargs):\n        if cls._instance is None:\n            cls._instance = super(RunningConfig, cls).__new__(cls)\n            cls._instance._initialized = False\n        return cls._instance\n    \n    \n    def __init__(self):\n        self.tasks = None\n        self.optimizer = {'SpaceRefiner':None, 'Sampler':None, 'ACF':None, 'Pretrain':None, 'Model':None, 'Normalizer':None}\n        self.metadata = {'SpaceRefiner':[], 'Sampler':[], 'ACF':[], 'Pretrain':[], 'Model':[], 'Normalizer':[]}\n        \n        \n    def set_tasks(self, tasks):\n        self.tasks = tasks\n        \n    def set_optimizer(self, optimizer):\n        self.optimizer = optimizer\n        if 'SamplerInitNum' not in self.optimizer:\n            self.optimizer['SamplerInitNum'] = 11\n        self.optimizer['SamplerInitNum'] =  int(self.optimizer['SamplerInitNum'])\n\n    def set_metadata(self, metadata):\n        self.metadata[metadata['object']] = metadata['datasets']\n\n    "
  },
  {
    "path": "transopt/agent/registry.py",
    "content": "class Registry:\n    def __init__(self):\n        self._registry = {}\n\n    def register(self, name=None, cls=None, **kwargs):\n        if cls is None:\n            def wrapper(cls):\n                return self.register(name, cls, **kwargs)\n            return wrapper\n        \n        if name is None:\n            name = cls.__name__\n\n        if name in self._registry:\n            raise ValueError(f\"Error: '{name}' is already registered.\")\n        \n        self._registry[name] = {'cls': cls, **kwargs}\n        return cls\n\n    def get(self, name):\n        return self._registry[name]['cls']\n\n    def list_names(self):\n        return list(self._registry.keys())\n\n    def __getitem__(self, item):\n        return self.get(item)\n\n    def __contains__(self, item):\n        return item in self._registry\n\nspace_refiner_registry = Registry()\nsampler_registry = Registry()\npretrain_registry = Registry()\nmodel_registry = Registry()\nacf_registry = Registry()\nproblem_registry = Registry()\nstatistic_registry = Registry()\nselector_registry = Registry()\nnormalizer_registry = Registry()\n"
  },
  {
    "path": "transopt/agent/run_cli.py",
    "content": "import os\nimport traceback\nimport argparse\nfrom services import Services\n\nos.environ[\"MKL_NUM_THREADS\"] = \"1\"\nos.environ[\"NUMEXPR_NUM_THREADS\"] = \"1\"\nos.environ[\"OMP_NUM_THREADS\"] = \"1\"\n\n\ndef set_task(services, args):\n    task_info = [{\n        \"name\": args.task_name,\n        \"num_vars\": args.num_vars,\n        \"num_objs\": args.num_objs,\n        \"fidelity\": args.fidelity,\n        \"workloads\": args.workloads,\n        \"budget_type\": args.budget_type,\n        \"budget\": args.budget,\n    }]\n    services.receive_tasks(task_info)\n\n\ndef set_optimizer(services, args):\n    optimizer_info = {\n        \"SpaceRefiner\": args.space_refiner,\n        \"SpaceRefinerParameters\": args.space_refiner_parameters,\n        \"SpaceRefinerDataSelector\": args.space_refiner_data_selector,\n        \"SpaceRefinerDataSelectorParameters\": args.space_refiner_data_selector_parameters,\n        \"Sampler\": args.sampler,\n        \"SamplerInitNum\": args.sampler_init_num,\n        \"SamplerParameters\": args.sampler_parameters,\n        \"SamplerDataSelector\": args.sampler_data_selector,\n        \"SamplerDataSelectorParameters\": args.sampler_data_selector_parameters,\n        \"Pretrain\": args.pre_train,\n        \"PretrainParameters\": args.pre_train_parameters,\n        \"PretrainDataSelector\": args.pre_train_data_selector,\n        \"PretrainDataSelectorParameters\": args.pre_train_data_selector_parameters,\n        \"Model\": args.model,\n        \"ModelParameters\": args.model_parameters,\n        \"ModelDataSelector\": args.model_data_selector,\n        \"ModelDataSelectorParameters\": args.model_data_selector_parameters,\n        \"ACF\": args.acquisition_function,\n        \"ACFParameters\": args.acquisition_function_parameters,\n        \"ACFDataSelector\": args.acquisition_function_data_selector,\n        \"ACFDataSelectorParameters\": args.acquisition_function_data_selector_parameters,\n        \"Normalizer\": args.normalizer,\n        \"NormalizerParameters\": args.normalizer_parameters,\n        \"NormalizerDataSelector\": args.normalizer_data_selector,\n        \"NormalizerDataSelectorParameters\": args.normalizer_data_selector_parameters,\n    }\n    services.receive_optimizer(optimizer_info)\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    # Task\n    parser.add_argument(\"-n\", \"--task_name\", type=str, default=\"MixupOOD\")\n    parser.add_argument(\"-v\", \"--num_vars\", type=int, default=2)\n    parser.add_argument(\"-o\", \"--num_objs\", type=int, default=1)\n    parser.add_argument(\"-f\", \"--fidelity\", type=str, default=\"\")\n    parser.add_argument(\"-w\", \"--workloads\", type=str, default=\"0\")\n    parser.add_argument(\"-bt\", \"--budget_type\", type=str, default=\"Num_FEs\")\n    parser.add_argument(\"-b\", \"--budget\", type=int, default=100)\n    # Optimizer\n    parser.add_argument(\"-sr\", \"--space_refiner\", type=str, default=\"None\")\n    parser.add_argument(\"-srp\", \"--space_refiner_parameters\", type=str, default=\"\")\n    parser.add_argument(\"-srd\", \"--space_refiner_data_selector\", type=str, default=\"None\")\n    parser.add_argument(\"-srdp\", \"--space_refiner_data_selector_parameters\", type=str, default=\"\")\n    parser.add_argument(\"-sp\", \"--sampler\", type=str, default=\"random\")\n    parser.add_argument(\"-spi\", \"--sampler_init_num\", type=int, default=22)\n    parser.add_argument(\"-spp\", \"--sampler_parameters\", type=str, default=\"\")\n    
parser.add_argument(\"-spd\", \"--sampler_data_selector\", type=str, default=\"None\")\n    parser.add_argument(\"-spdp\", \"--sampler_data_selector_parameters\", type=str, default=\"\")\n    parser.add_argument(\"-pt\", \"--pre_train\", type=str, default=\"None\")\n    parser.add_argument(\"-ptp\", \"--pre_train_parameters\", type=str, default=\"\")\n    parser.add_argument(\"-ptd\", \"--pre_train_data_selector\", type=str, default=\"None\")\n    parser.add_argument(\"-ptdp\", \"--pre_train_data_selector_parameters\", type=str, default=\"\")\n    parser.add_argument(\"-m\", \"--model\", type=str, default=\"GP\")\n    parser.add_argument(\"-mp\", \"--model_parameters\", type=str, default=\"\")\n    parser.add_argument(\"-md\", \"--model_data_selector\", type=str, default=\"None\")\n    parser.add_argument(\"-mdp\", \"--model_data_selector_parameters\", type=str, default=\"\")\n    parser.add_argument(\"-acf\", \"--acquisition_function\", type=str, default=\"EI\")\n    parser.add_argument(\"-acfp\", \"--acquisition_function_parameters\", type=str, default=\"\")\n    parser.add_argument(\"-acfd\", \"--acquisition_function_data_selector\", type=str, default=\"None\")\n    parser.add_argument(\"-acfdp\", \"--acquisition_function_data_selector_parameters\", type=str, default=\"\")\n    parser.add_argument(\"-norm\", \"--normalizer\", type=str, default=\"Standard\")\n    parser.add_argument(\"-normp\", \"--normalizer_parameters\", type=str, default=\"\")\n    parser.add_argument(\"-normd\", \"--normalizer_data_selector\", type=str, default=\"None\")\n    parser.add_argument(\"-normdp\", \"--normalizer_data_selector_parameters\", type=str, default=\"\")\n    # Seed\n    parser.add_argument(\"-s\", \"--seeds\", type=int, default=0)\n    # parser.add_argument(\"-s\", \"--seeds\", type=str, default=\"5\")\n\n\n    args = parser.parse_args()\n    services = Services(None, None, None)\n    services._initialize_modules()\n    set_task(services, args)\n    set_optimizer(services, args)\n    try:\n        services._run_optimize_process(seed = args.seeds)\n    except Exception as e:\n        traceback.print_exc()\n"
  },
  {
    "path": "transopt/agent/services.py",
    "content": "import os\nimport signal\nimport time\nfrom multiprocessing import Manager, Process\n\nimport numpy as np\n\nfrom transopt.agent.chat.openai_chat import OpenAIChat\nfrom transopt.agent.config import Config, RunningConfig\nfrom transopt.agent.registry import *\nfrom transopt.analysis.parameter_network import plot_network\nfrom transopt.benchmark.instantiate_problems import InstantiateProblems\nfrom transopt.datamanager.manager import Database, DataManager\nfrom transopt.optimizer.construct_optimizer import (ConstructOptimizer,\n                                                    ConstructSelector)\n\nfrom transopt.utils.log import logger\nfrom transopt.analysis.mds import FootPrint\n\nclass Services:\n    def __init__(self, task_queue, result_queue, lock):\n        self.config = Config()\n        self.running_config = RunningConfig()\n        \n        # DataManager for general tasks, not specific optimization tasks\n        self.data_manager = DataManager()\n        self.tasks_info = []\n\n        self.openai_chat = OpenAIChat(\n            api_key=self.config.OPENAI_API_KEY,\n            model=\"gpt-3.5-turbo\",\n            base_url=self.config.OPENAI_URL,\n            data_manager= self.data_manager\n        )\n\n        self._initialize_modules()\n        self.process_info = Manager().dict()\n        self.lock = Manager().Lock()\n\n    def chat(self, user_input):\n        response_content = self.openai_chat.get_response(user_input)\n        return response_content\n\n    def _initialize_modules(self):\n        # import transopt.benchmark.CPD\n        # import transopt.benchmark.CPD\n        import transopt.benchmark.HPOOOD\n        import transopt.benchmark.HPO\n        import transopt.benchmark.synthetic\n        import transopt.benchmark.CSSTuning\n        import transopt.optimizer.acquisition_function\n        import transopt.optimizer.model\n        import transopt.optimizer.normalizer\n        import transopt.optimizer.pretrain\n        import transopt.optimizer.refiner\n        import transopt.optimizer.sampler\n        import transopt.optimizer.selector\n        \n        \n    def get_modules(self):\n        basic_info = {}\n        tasks_info = []\n        selector_info = []\n        model_info = []\n        sampler_info = []\n        acf_info = []\n        pretrain_info = [{'name':'None'}]\n        refiner_info = [{'name':'None'}]\n        normalizer_info = [{'name':'None'}]\n\n        # tasks information\n        task_names = problem_registry.list_names()\n        for name in task_names:\n            if problem_registry[name].problem_type == \"synthetic\":\n                num_obj = problem_registry[name].num_objectives\n                num_var = problem_registry[name].num_variables\n                task_info = {\n                    \"name\": name,\n                    \"problem_type\": \"synthetic\",\n                    \"anyDim\": \"True\",\n                    'num_vars': [],\n                    \"num_objs\": [1],\n                    \"workloads\": [],\n                    \"fidelity\": [],\n                }\n            else:\n                num_obj = problem_registry[name].num_objectives\n                num_var = problem_registry[name].num_variables\n                fidelity = problem_registry[name].fidelity\n                workloads = problem_registry[name].workloads\n                problem_type = problem_registry[name].problem_type\n                task_info = {\n                    \"name\": name,\n                    \"problem_type\": 
problem_type,\n                    \"anyDim\": False,\n                    \"num_vars\": [num_var],\n                    \"num_objs\": [num_obj],\n                    \"workloads\": workloads,\n                    \"fidelity\": [fidelity],\n                }\n            tasks_info.append(task_info)\n        basic_info[\"TasksData\"] = tasks_info\n\n        sampler_names = sampler_registry.list_names()\n        for name in sampler_names:\n            sampler_info.append({\"name\": name})\n        basic_info[\"Sampler\"] = sampler_info\n\n        refiner_names = space_refiner_registry.list_names()\n        for name in refiner_names:\n            refiner_info.append({\"name\": name})\n        basic_info[\"SpaceRefiner\"] = refiner_info\n\n        pretrain_names = pretrain_registry.list_names()\n        for name in pretrain_names:\n            pretrain_info.append({\"name\": name})\n        basic_info[\"Pretrain\"] = pretrain_info\n\n        model_names = model_registry.list_names()\n        for name in model_names:\n            model_info.append({\"name\": name})\n        basic_info[\"Model\"] = model_info\n\n        acf_names = acf_registry.list_names()\n        for name in acf_names:\n            acf_info.append({\"name\": name})\n        basic_info[\"ACF\"] = acf_info\n\n        selector_names = selector_registry.list_names()\n        for name in selector_names:\n            selector_info.append({\"name\": name})\n        \n        basic_info[\"DataSelector\"] = selector_info\n        \n        normalizer_names = normalizer_registry.list_names()\n        for name in normalizer_names:\n            normalizer_info.append({\"name\": name})\n        basic_info[\"Normalizer\"] = normalizer_info\n\n        return basic_info\n    \n    \n    def get_comparision_modules(self):\n        module_info = {}\n        model_info = []\n        sampler_info = []\n        acf_info = []\n        pretrain_info = ['None']\n        refiner_info = ['None']\n        normalizer_info = ['None']\n\n        sampler_names = sampler_registry.list_names()\n        for name in sampler_names:\n            sampler_info.append(name)\n        module_info[\"Sampler\"] = sampler_info\n\n        refiner_names = space_refiner_registry.list_names()\n        for name in refiner_names:\n            refiner_info.append(name)\n        module_info[\"Refiner\"] = refiner_info\n\n        pretrain_names = pretrain_registry.list_names()\n        for name in pretrain_names:\n            pretrain_info.append(name)\n        module_info[\"Pretrain\"] = pretrain_info\n\n        model_names = model_registry.list_names()\n        for name in model_names:\n            model_info.append(name)\n        module_info[\"Model\"] = model_info\n\n        acf_names = acf_registry.list_names()\n        for name in acf_names:\n            acf_info.append(name)\n        module_info[\"ACF\"] = acf_info\n        \n        normalizer_names = normalizer_registry.list_names()\n        for name in normalizer_names:\n            normalizer_info.append(name)\n        module_info[\"Normalizer\"] = normalizer_info\n\n        return module_info\n\n    def search_dataset(self, search_method, dataset_name, dataset_info):\n        if search_method == 'Fuzzy':\n            datasets_list = {\"isExact\": False, \n                             \"datasets\": list(self.data_manager.search_datasets_by_name(dataset_name))}\n        elif search_method == 'Hash':\n            dataset_detail_info = self.data_manager.get_dataset_info(dataset_name)\n            if 
dataset_detail_info:\n                datasets_list = {\"isExact\": True, \"datasets\": dataset_detail_info['additional_config']}\n            else:\n                raise ValueError(\"Dataset not found\")\n        elif search_method == 'LSH':\n            datasets_list = {\"isExact\": False, \n                             \"datasets\": list(self.data_manager.search_similar_datasets(dataset_name, dataset_info))}\n        else:\n            raise ValueError(\"Invalid search method\")\n\n        return datasets_list\n\n    def convert_metadata(self, conditions):\n        type_map = {\n            \"NumVars\": int,\n            \"NumObjs\": int,\n            \"Workload\": int,\n            \"Seed\": int,\n            # Add other fields as necessary\n        }\n        converted_conditions = {}\n        for key, value in conditions.items():\n            if key in type_map:\n                try:\n                    # Convert the value according to its expected type\n                    if type_map[key] == int:\n                        converted_conditions[key] = int(value)\n                    elif type_map[key] == float:\n                        converted_conditions[key] = float(value)\n                    elif type_map[key] == bool:\n                        converted_conditions[key] = value.lower() in ['true', '1', 't', 'yes', 'y']\n                    else:\n                        converted_conditions[key] = value  # Assume string or no conversion needed\n                except ValueError:\n                    raise ValueError(f\"Invalid value for {key}: {value}\")\n            else:\n                # If no specific type is expected, assume string\n                converted_conditions[key] = value\n\n        return converted_conditions\n\n    def comparision_search(self, conditions):\n        conditions = {k: v for k, v in conditions.items() if v}\n        conditions = self.convert_metadata(conditions)\n        \n        key_map = {\n            \"TaskName\": \"problem_name\",\n            \"NumVars\": \"dimensions\",\n            \"NumObjs\": \"objectives\",\n            \"Fidelity\": \"fidelities\",\n            \"Workload\": \"workloads\",\n            \"Seed\": \"seeds\",\n            \"Refiner\": \"space_refiner\",\n            \"Sampler\": \"sampler\",\n            \"Pretrain\": \"pretrain\",\n            \"Model\": \"model\",\n            \"ACF\": \"acf\",\n            \"Normalizer\": \"normalizer\"\n        }\n        \n        # Rename the condition keys to match the column names in the database\n        conditions = {key_map[k]: v for k, v in conditions.items() if k in key_map}\n        \n        return self.data_manager.db.search_tables_by_metadata(conditions)\n    \n    def set_metadata(self, dataset_names):\n        self.running_config.set_metadata(dataset_names)\n\n    def receive_tasks(self, tasks_info):\n        tasks = {}\n        self.tasks_info = tasks_info\n        for task in tasks_info:\n            # Build a fresh workload list per task; non-integer entries stay as strings\n            workloads = []\n            for item in task[\"workloads\"].split(\",\"):\n                try:\n                    workloads.append(int(item))\n                except ValueError:\n                    workloads.append(item)\n            tasks[task[\"name\"]] = {\n                \"budget_type\": task[\"budget_type\"],\n                \"budget\": int(task[\"budget\"]),\n                \"workloads\": workloads,\n                \"params\": {\"input_dim\": int(task[\"num_vars\"])},\n            }\n\n        self.running_config.set_tasks(tasks)\n        return\n\n    def receive_optimizer(self, optimizer_info):\n        self.running_config.set_optimizer(optimizer_info)\n        return\n\n    def receive_metadata(self, metadata_info):\n        self.running_config.set_metadata(metadata_info)\n        return\n\n    def get_all_datasets(self):\n        all_tables = self.data_manager.db.get_table_list()\n        return [self.data_manager.db.query_dataset_info(table) for table in all_tables]\n    \n    def get_experiment_datasets(self):\n        experiment_tables = self.data_manager.db.get_table_list()\n        return [(table, self.data_manager.db.query_dataset_info(table)) for table in experiment_tables]\n\n    def construct_dataset_info(self, task_set, running_config, seed):\n        dataset_info = {}\n        dataset_info[\"variables\"] = [\n            {\"name\": var.name, \"type\": var.type, \"range\": var.range}\n            for var_name, var in task_set.get_cur_searchspace_info().items()\n        ]\n        dataset_info[\"objectives\"] = [\n            {\"name\": name, \"type\": type}\n            for name, type in task_set.get_curobj_info().items()\n        ]\n        dataset_info[\"fidelities\"] = [\n            {\"name\": var.name, \"type\": var.type, \"range\": var.range}\n            for var_name, var in task_set.get_cur_fidelity_info().items()\n        ]\n\n        # Simplify dataset name construction\n        timestamp = int(time.time())\n        dataset_name = f\"{task_set.get_curname()}_w{task_set.get_cur_workload()}_s{seed}_{timestamp}\"\n\n        dataset_info['additional_config'] = {\n            \"problem_name\": task_set.get_curname(),\n            \"dim\": len(dataset_info[\"variables\"]),\n            \"obj\": len(dataset_info[\"objectives\"]),\n            \"fidelity\": ', '.join([d['name'] for d in dataset_info[\"fidelities\"] if 'name' in d]) if dataset_info[\"fidelities\"] else '',\n            \"workloads\": task_set.get_cur_workload(),\n            \"budget_type\": task_set.get_cur_budgettype(),\n            \"initial_number\": running_config.optimizer['SamplerInitNum'],\n            \"budget\": task_set.get_cur_budget(),\n            \"seeds\": seed,\n            \"SpaceRefiner\": running_config.optimizer['SpaceRefiner'],\n            \"Sampler\": running_config.optimizer['Sampler'],\n            \"Pretrain\": running_config.optimizer['Pretrain'],\n            \"Model\": running_config.optimizer['Model'],\n            \"ACF\": running_config.optimizer['ACF'],\n            \"Normalizer\": running_config.optimizer['Normalizer'],\n            \"DatasetSelector\": f\"SpaceRefiner - {running_config.optimizer['SpaceRefinerDataSelector']}, \\\n                Sampler - {running_config.optimizer['SamplerDataSelector']}, \\\n                Pretrain - {running_config.optimizer['PretrainDataSelector']}, \\\n                Model - {running_config.optimizer['ModelDataSelector']}, \\\n                ACF - {running_config.optimizer['ACFDataSelector']}, \\\n                Normalizer - {running_config.optimizer['NormalizerDataSelector']}\",\n            \"metadata\": running_config.metadata if running_config.metadata else [],\n        }\n\n        return dataset_info, dataset_name\n\n    def get_metadata(self, module_name):\n        if len(self.running_config.metadata[module_name]):\n            metadata = {}\n            metadata_info = {}\n            for dataset_name in self.running_config.metadata[module_name]:\n                metadata[dataset_name] = 
self.data_manager.db.select_data(dataset_name)\n                metadata_info[dataset_name] = self.data_manager.db.query_dataset_info(dataset_name)\n            return metadata, metadata_info\n        else:\n            return {}, {}\n\n    def save_data(self, dataset_name, parameters, observations, iteration):\n        # Merge each parameter dict with its observations and tag the row with its batch id\n        data = []\n        for param, obs in zip(parameters, observations):\n            row = dict(param)\n            row.update(obs)\n            row['batch'] = iteration\n            data.append(row)\n        self.data_manager.db.insert_data(dataset_name, data)\n    \n    def remove_dataset(self, dataset_name):\n        if isinstance(dataset_name, str):\n            self.data_manager.db.remove_table(dataset_name)\n        elif isinstance(dataset_name, list):\n            for name in dataset_name:\n                self.data_manager.db.remove_table(name)\n        else:\n            raise ValueError(\"Invalid dataset name\")\n\n    def run_optimize(self, seeds):\n        # Create a separate process for each seed\n        process_list = []\n        for seed in seeds:\n            p = Process(target=self._run_optimize_process, args=(int(seed),))\n            process_list.append(p)\n            p.start()\n        \n        for p in process_list:\n            p.join()\n            \n    def _run_optimize_process(self, seed):\n        # Run the full optimization workflow for one seed, tracking progress under this PID\n        try:\n            pid = os.getpid()\n            self.process_info[pid] = {'status': 'running', 'seed': seed, 'budget': None, 'task': None, 'iteration': 0, 'dataset_name': None, 'progress': 0}\n            logger.info(f\"Start process #{pid}\")\n\n            # Instantiate problems and optimizer\n            task_set = InstantiateProblems(self.running_config.tasks, seed)\n            optimizer = ConstructOptimizer(self.running_config.optimizer, seed)\n            dataselector = ConstructSelector(self.running_config.optimizer, seed)\n\n            while task_set.get_unsolved_num():\n                search_space = task_set.get_cur_searchspace()\n                dataset_info, dataset_name = self.construct_dataset_info(task_set, self.running_config, seed=seed)\n                \n                self.data_manager.create_dataset(dataset_name, dataset_info, overwrite=True)\n                self.update_process_info(pid, {'dataset_name': dataset_name, 'task': task_set.get_curname(), 'budget': task_set.get_cur_budget()})\n\n                optimizer.link_task(task_name=task_set.get_curname(), search_space=search_space)\n\n                metadata, metadata_info = self.get_metadata('SpaceRefiner')\n                if dataselector['SpaceRefinerDataSelector']:\n                    metadata, metadata_info = dataselector['SpaceRefinerDataSelector'].fetch_data(dataset_info)\n                optimizer.search_space_refine(metadata, metadata_info)\n\n                metadata, metadata_info = self.get_metadata('Sampler')\n                if dataselector['SamplerDataSelector']:\n                    metadata, metadata_info = dataselector['SamplerDataSelector'].fetch_data(dataset_info)\n                samples = optimizer.sample_initial_set(metadata, metadata_info)\n\n                parameters = [search_space.map_to_design_space(sample) for sample in samples]\n                observations = task_set.f(parameters)\n                
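# Persist the initial design points before entering the optimization loop.\n                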
self.save_data(dataset_name, parameters, observations, self.process_info[pid]['iteration'])\n                optimizer.observe(samples, observations)\n\n                # Pretrain\n                metadata, metadata_info = self.get_metadata('Pretrain')\n                if dataselector['PretrainDataSelector']:\n                    metadata, metadata_info = dataselector['PretrainDataSelector'].fetch_data(dataset_info)\n                optimizer.pretrain(metadata, metadata_info)\n\n                metadata, metadata_info = self.get_metadata('Model')\n                if dataselector['ModelDataSelector']:\n                    metadata, metadata_info = dataselector['ModelDataSelector'].fetch_data(dataset_info)\n                optimizer.meta_fit(metadata, metadata_info)\n\n                while task_set.get_rest_budget():\n                    optimizer.fit()\n                    suggested_samples = optimizer.suggest()\n                    parameters = [search_space.map_to_design_space(sample) for sample in suggested_samples]\n                    observations = task_set.f(parameters)\n                    if observations is None:\n                        break\n                    self.save_data(dataset_name, parameters, observations, self.process_info[pid]['iteration'])\n                    optimizer.observe(suggested_samples, observations)\n\n                    cur_iter = self.process_info[pid]['iteration']\n                    self.update_process_info(pid, {'iteration': cur_iter + 1})\n                    self.update_process_info(pid, {'progress': 100 * (task_set.get_cur_budget() - task_set.get_rest_budget()) / task_set.get_cur_budget()})\n                    logger.info(f\"PID {pid}: Seed {seed}, Task {task_set.get_curname()}, Iteration {self.process_info[pid]['iteration']}\")\n                task_set.roll()\n        except Exception as e:\n            logger.error(f\"Error in process {pid}: {str(e)}\")\n            raise\n        finally:\n            self.update_process_info(pid, {'status': 'completed'})\n\n    def terminate_task(self, pid):\n        with self.lock:\n            if pid in self.process_info:\n                dataset_name = self.process_info[pid].get('dataset_name')\n                try:\n                    os.kill(pid, signal.SIGTERM)\n                    logger.info(f\"Process {pid} has been terminated.\")\n                except Exception as e:\n                    logger.error(f\"Failed to terminate process {pid}: {str(e)}\")\n                if dataset_name:\n                    try:\n                        self.data_manager.remove_dataset(dataset_name)\n                        logger.info(f\"Dataset {dataset_name} associated with process {pid} has been deleted.\")\n                    except Exception as e:\n                        logger.error(f\"Failed to delete dataset {dataset_name}: {str(e)}\")\n                del self.process_info[pid]\n            else:\n                logger.warning(f\"No such process {pid} found in process info.\")\n    \n    def update_process_info(self, pid, updates):\n        with self.lock:\n            temp_info = self.process_info[pid].copy()\n            temp_info.update(updates)\n            self.process_info[pid] = temp_info\n        \n    def get_all_process_info(self):\n        return dict(self.process_info)\n    \n    def get_box_plot_data(self, task_names):\n        all_data = {}\n        for group_id, group in 
enumerate(task_names):\n            all_data[str(group_id)] = []\n            for task_name in group:\n                data = self.data_manager.db.select_data(task_name)\n                table_info = self.data_manager.db.query_dataset_info(task_name)\n                objectives = table_info[\"objectives\"]\n                obj = objectives[0][\"name\"]\n                try:\n                    best_obj = min(d[obj] for d in data)\n                except (ValueError, KeyError):\n                    # Skip runs with no recorded values for this objective\n                    continue\n                all_data[str(group_id)].append(best_obj)\n\n        return all_data\n\n    def get_report_charts(self, task_name):\n        all_data = self.data_manager.db.select_data(task_name)\n\n        table_info = self.data_manager.db.query_dataset_info(task_name)\n        objectives = table_info[\"objectives\"]\n        ranges = [tuple(var['range']) for var in table_info[\"variables\"]]\n        initial_number = table_info[\"additional_config\"][\"initial_number\"]\n        obj = objectives[0][\"name\"]\n        obj_type = objectives[0][\"type\"]\n\n        obj_data = [data[obj] for data in all_data]\n        var_data = [[data[var[\"name\"]] for var in table_info[\"variables\"]] for data in all_data]\n        variables = [var[\"name\"] for var in table_info[\"variables\"]]\n        ret = {}\n        ret.update(self.construct_footprint_data(task_name, var_data, ranges, initial_number))\n        ret.update(self.construct_trajectory_data(task_name, obj_data, obj_type))\n        # ret.update(self.construct_importance_data(task_name, var_data, obj_data, variables))\n\n        return ret\n\n    def get_report_traj(self, task_name):\n        all_data = self.data_manager.db.select_data(task_name)\n\n        table_info = self.data_manager.db.query_dataset_info(task_name)\n        objectives = table_info[\"objectives\"]\n\n        obj = objectives[0][\"name\"]\n        obj_type = objectives[0][\"type\"]\n\n        obj_data = [data[obj] for data in all_data]\n        ret = {}\n        ret.update(self.construct_trajectory_data(task_name, obj_data, obj_type))\n\n        return ret\n\n    def construct_footprint_data(self, name, var_data, ranges, initial_number):\n        # Embed the evaluated configurations (plus the search-space corners) into 2-D via MDS\n        fp = FootPrint(var_data, ranges)\n        fp.calculate_distances()\n        fp.get_mds()\n        scatter_data = {'Initial vectors': fp._reduced_data[:initial_number], 'Decision vectors': fp._reduced_data[initial_number:len(fp.X)], 'Boundary vectors': fp._reduced_data[len(fp.X):]}\n        return {\"ScatterData\": scatter_data}\n    \n    def construct_statistic_trajectory_data(self, task_names):\n        all_data = []\n        for group_id, group in enumerate(task_names):\n            min_data = {'name': f'Algorithm{group_id + 1}', 'average': [], 'uncertainty': []}\n            res = []\n            max_length = 0\n            for task_name in group:\n                data = self.data_manager.db.select_data(task_name)\n                table_info = self.data_manager.db.query_dataset_info(task_name)\n                objectives = table_info[\"objectives\"]\n                obj = objectives[0][\"name\"]\n                obj_data = [d[obj] for d in data]\n                acc_obj_data = np.minimum.accumulate(obj_data).flatten().tolist()\n                res.append(acc_obj_data)\n                if len(acc_obj_data) > max_length:\n                    max_length = len(acc_obj_data)\n\n            # Compute the median and standard deviation at each evaluation index\n            
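# Runs can have different lengths, so a run contributes at index i only while it still has data.\n            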
for i in range(max_length):\n                current_data = [r[i] for r in res if i < len(r)]\n                median = np.median(current_data)\n                std = np.std(current_data)\n                min_data['average'].append({'FEs': i + 1, 'y': median})\n                min_data['uncertainty'].append({'FEs': i + 1, 'y': [median - std, median + std]})\n\n            all_data.append(min_data)\n\n        return all_data\n        \n    \n    def construct_trajectory_data(self, name, obj_data, obj_type=\"minimize\"):\n        # Initialize the list to store trajectory data and the best value seen so far\n        trajectory = []\n        best_value = float(\"inf\") if obj_type == \"minimize\" else -float(\"inf\")\n        best_values_so_far = []\n\n        # Loop through each function evaluation\n        for index, current_value in enumerate(obj_data, start=1):\n            # Update the best value based on the objective type\n            if obj_type == \"minimize\":\n                if current_value < best_value:\n                    best_value = current_value\n            else:  # maximize\n                if current_value > best_value:\n                    best_value = current_value\n\n            # Append the best value observed so far to the list\n            best_values_so_far.append(best_value)\n            trajectory.append({\"FEs\": index, \"y\": best_value})\n\n        uncertainty = []\n        for data_point in trajectory:\n            base_value = data_point[\"y\"]\n            uncertainty_range = [base_value, base_value]\n            uncertainty.append({\"FEs\": data_point[\"FEs\"], \"y\": uncertainty_range})\n\n        trajectory_data = {\n            \"name\": name,\n            \"average\": trajectory,\n            \"uncertainty\": uncertainty,\n        }\n\n        return {\"TrajectoryData\": [trajectory_data]}\n\n    def construct_importance_data(self, name, var_data, obj_data, variables):\n        # plot_network(np.array(var_data), np.array(obj_data), variables)\n        return {}\n\n    def get_configuration(self):\n        configuration_info = {}\n        configuration_info[\"tasks\"] = self.tasks_info\n        configuration_info[\"optimizer\"] = self.running_config.optimizer\n        configuration_info[\"datasets\"] = self.running_config.metadata\n        return configuration_info\n"
  },
  {
    "path": "transopt/agent/testood.py",
    "content": "import logging\nimport time\nfrom typing import Dict, Union\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torchvision\nimport tqdm\nimport matplotlib.pyplot as plt\n\nfrom torchvision import datasets, transforms\n\nfrom transopt.agent.registry import problem_registry\nfrom transopt.benchmark.problem_base.non_tab_problem import NonTabularProblem\nfrom transopt.optimizer.sampler.random import RandomSampler\nfrom transopt.space.fidelity_space import FidelitySpace\nfrom transopt.space.search_space import SearchSpace\nfrom transopt.space.variable import *\nfrom transopt.utils.openml_data_manager import OpenMLHoldoutDataManager\nfrom transopt.datamanager.database import Database\nfrom services import Services\n\nfrom transopt.benchmark.HPO.HPOCNN import *\n\nimport os\nimport sys\nimport unittest\nfrom pathlib import Path\n\ndef plot_acc_scatter(train_acc, test_acc):\n    # Create a scatter plot\n    plt.scatter(train_acc, test_acc, label='Accuracy Points')\n    \n    # Plot the diagonal line\n    min_val = min(min(train_acc), min(test_acc))\n    max_val = max(max(train_acc), max(test_acc))\n    plt.plot([min_val, max_val], [min_val, max_val], 'r--', label='Diagonal Line')\n    \n    # Add labels and title\n    plt.xlabel('Train Accuracy')\n    plt.ylabel('Test Accuracy')\n    plt.title('Train vs Test Accuracy')\n    plt.legend()\n    \n    # Show the plot\n    plt.savefig('./train vs test accuracy.png')\n\n\n\n\n\ncurrent_dir = Path(__file__).resolve().parent\npackage_dir = current_dir.parent\nsys.path.insert(0, str(package_dir))\n\ndef setUp():\n    db = Database(\"database.db\")\n    table_name = \"test_table\"\n        \ndef list_pth_files(directory):\n    # Create an empty list to store .pth file paths\n    pth_files = []\n\n    # Walk through the directory\n    for root, dirs, files in os.walk(directory):\n        for file in files:\n            # Check if the file ends with .pth\n            if file.endswith('.pth'):\n                # Create the full path to the file\n                file_path = os.path.join(root, file)\n                # Add the file path to the list\n                pth_files.append(file_path)\n\n    return pth_files\n\nif __name__ == \"__main__\":\n\n    services = Services(None, None, None)\n    task_name = []\n    parameters = []\n    # tables = services.get_experiment_datasets()\n    # for table in tables:\n    #     print(table[1]['data_number'])\n    #     if table[1]['data_number'] == 100:\n    #         task_name = table[0]\n    #         print(task_name)\n\n    #         all_data = services.data_manager.db.select_data(task_name)\n    #         table_info = services.data_manager.db.query_dataset_info(task_name)\n                    \n    #         objectives = table_info[\"objectives\"]\n    #         ranges = [tuple(var['range']) for var in table_info[\"variables\"]]\n    #         initial_number = table_info[\"additional_config\"][\"initial_number\"]\n    #         obj = objectives[0][\"name\"]\n    #         obj_type = objectives[0][\"type\"]\n\n    #         obj_data = [data[obj] for data in all_data]\n    #         max_id = np.argmax(obj_data)\n            \n    #         var_data = [[data[var[\"name\"]] for var in table_info[\"variables\"]] for data in all_data]\n    #         variables = [var[\"name\"] for var in table_info[\"variables\"]]\n    #         ret = {}\n    #         traj = services.construct_trajectory_data(task_name, obj_data, 
obj_type=\"maximize\")\n    #         best_var = var_data[max_id]\n    #         lr = np.exp2(best_var[0])\n    #         momentum = best_var[1]\n    #         weight_decay = np.exp2(best_var[2])\n    #         parameters.append((lr, momentum, weight_decay))\n    \n    \n    \n    if torch.cuda.is_available():\n        device = torch.device(\"cuda\")\n        torch.cuda.set_device(1)\n    else:\n        device = torch.device(\"cpu\")\n        \n    trainset = datasets.MNIST(\n        root=\"./data\", train=True, download=True, transform=transforms.Compose(\n            [\n                BGRed(),\n                \n                transforms.ToTensor(),\n                transforms.Resize((32, 32)),\n                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n            ]\n        )\n    )\n    testset = datasets.MNIST(\n        root=\"./data\", train=False, download=True, transform=transforms.Compose(\n            [\n                BGRed(),\n                \n                transforms.ToTensor(),\n                transforms.Resize((32, 32)),\n                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n            ]\n        )\n    )\n    \n    \n    trainloader = torch.utils.data.DataLoader(\n        trainset, batch_size=64, shuffle=True\n    )\n    testloader = torch.utils.data.DataLoader(\n        testset, batch_size=64, shuffle=False\n    )\n    epochs = 30\n    batch_size = 64\n\n    # lr = 0.0017607943222948076\n    # momentum = 0.6997583600209312\n    # weight_decay = 0.004643925899318933\n    \n    # lr = parameters[0][0]\n    # momentum = parameters[0][1]\n    # weight_decay = parameters[0][2]\n    # print(lr, momentum, weight_decay)\n    \n    directory = './temp_model/CNN_101/'  # Replace with the path to your directory\n    pth_files = list_pth_files(directory)\n    train_acc = []\n    test_acc = []\n\n    net = Learner(target_classes=10).to(device)\n    for model_name in pth_files:\n        print(model_name)\n        net.load_state_dict(torch.load(f'{model_name}'))\n    # criterion = nn.NLLLoss()\n    # optimizer = optim.SGD(\n    #     net.parameters(),\n    #     lr=lr,\n    #     momentum=momentum,\n    #     weight_decay = weight_decay,\n    # )\n    # start_time = time.time()\n        correct = 0\n        total = 0\n        with torch.no_grad():\n            for data in trainloader:\n                images, labels = data[0].to(device), data[1].to(device)\n                outputs = net(images)\n                _, predicted = torch.max(outputs.data, 1)\n                total += labels.size(0)\n                correct += (predicted == labels).sum().item()\n\n            \n            accuracy = correct / total\n            print(\"Training Accuracy: %.2f %%\" % (100 * accuracy))\n            train_acc.append(accuracy * 100)\n\n            # print(\"Epoch %d, Loss: %.3f\" % (e + 1, running_loss / len(trainloader)))\n\n        correct = 0\n        total = 0\n        import os\n\n\n\n        with torch.no_grad():\n            for data in testloader:\n                images, labels = data[0].to(device), data[1].to(device)\n                outputs = net(images)\n                _, predicted = torch.max(outputs.data, 1)\n                total += labels.size(0)\n                correct += (predicted == labels).sum().item()\n\n        accuracy = correct / total\n        end_time = time.time()\n        test_acc.append(accuracy * 100)\n        print(\"Test Accuracy: %.2f %%\" % (100 * accuracy))\n        \n    plot_acc_scatter(train_acc, test_acc)\n\n\n     
\n    # model_save_path = './temp_model/model.pth'\n    # torch.save(net.state_dict(), model_save_path)"
  },
  {
    "path": "transopt/analysis/compile_tex.py",
    "content": "import os\nimport subprocess\nimport shutil\n\n\ndef compile_tex(tex_path, output_folder):\n    # 保存当前工作目录\n    original_cwd = os.getcwd()\n\n    # 将路径转换为绝对路径\n    tex_path = os.path.abspath(tex_path)\n    output_folder = os.path.abspath(output_folder)\n\n    # 获取文件名和文件夹路径\n    folder, filename = os.path.split(tex_path)\n    name, _ = os.path.splitext(filename)\n\n    # 切换到tex文件所在的文件夹\n    os.chdir(folder)\n\n    try:\n        # 编译tex文件\n        subprocess.run(['pdflatex', filename], check=True)\n\n        # 裁剪PDF文件\n        pdf_path = os.path.join(folder, name + '.pdf')\n        cropped_pdf_path = pdf_path.replace('.pdf', '-crop.pdf')\n        subprocess.run(['pdfcrop', pdf_path, cropped_pdf_path], check=True)\n\n        # 将裁剪后的PDF文件移动到输出文件夹，并去掉-crop\n        output_pdf_path = os.path.join(output_folder, name + '.pdf')\n        shutil.move(cropped_pdf_path, output_pdf_path)\n\n    except subprocess.CalledProcessError as e:\n        print(f\"命令执行失败: {e}\")\n    finally:\n        # 切换回原始工作目录\n        os.chdir(original_cwd)\n\n    # 删除.aux和.log文件以及未裁剪的PDF文件\n    aux_path = os.path.join(folder, name + '.aux')\n    log_path = os.path.join(folder, name + '.log')\n    if os.path.exists(aux_path):\n        os.remove(aux_path)\n    if os.path.exists(log_path):\n        os.remove(log_path)\n    if os.path.exists(pdf_path):\n        os.remove(pdf_path)"
  },
  {
    "path": "transopt/analysis/effect_size.py",
    "content": "import os\nimport json\nimport numpy as np\nfrom transopt.utils.sk import Rx\nfrom matplotlib import pyplot as plt\n\n\nplot_dim = 7\nfile_path = f\"/home/gsfall/data_files/synthetic/{plot_dim}d\"\n\n# file_path = f\"/home/gsfall/data_files/SVM\"\n\nplot_tasks = [\"Discus\", \"GriewankRosenbrock\", \"Rastrigin\", \"Rosenbrock\", \"Schwefel\"]\n# plot_tasks = [\"SVM\"]\nrank = {}\nfor plot_task in plot_tasks:\n    data_dict = {}\n    for file_name in os.listdir(file_path):\n        if file_name.endswith(\".json\"):\n            parts = file_name.split(\"_\")\n            task = parts[0]\n            method = \"_\".join(parts[1:]).split(\".\")[0]\n            if task == plot_task:\n                with open(os.path.join(file_path, file_name), \"r\") as f:\n                    data = json.load(f)\n                    data_dict[method] = data[\"m\"]\n    \n    Rx_data = Rx.data(**data_dict)\n    result = Rx.sk(Rx_data)\n    for r in result:\n        if r.rx in rank:\n            rank[r.rx].append(r.rank)\n        else:\n            rank[r.rx] = [r.rank]\n\nfile_name = \"rank.json\"\nwith open(os.path.join(file_path, file_name), \"w\") as json_file:\n    json.dump(rank, json_file)\npass"
  },
  {
    "path": "transopt/analysis/mds.py",
    "content": "import numpy as np\nfrom sklearn.manifold import MDS\nfrom scipy.spatial.distance import pdist, squareform\nimport matplotlib.pyplot as plt\nimport itertools\n\nclass FootPrint:\n    def __init__(self, X, range):\n        self.X = X\n        self.ranges = range\n        self.boundary_points = self.get_random_boundary_points(0)\n        self.config_ids = np.arange(0, len(self.X) + len(self.boundary_points)).tolist()\n        self.n_configs = len(self.config_ids)\n        \n        self._distance = None\n        self._reduced_data = None\n        \n\n    def calculate_distances(self):\n        \"\"\"\n        Calculate pairwise distances between configurations.\n\n        Parameters:\n        X (np.ndarray): Encoded data matrix.\n\n        Returns:\n        np.ndarray: Pairwise distances matrix.\n        \"\"\"\n        distances = np.zeros((self.n_configs, self.n_configs))\n        configs = np.vstack((self.X, self.boundary_points))\n        for i in range(self.n_configs):\n            for j in range(i + 1, self.n_configs):\n                distances[i, j] = distances[j, i] = np.linalg.norm(configs[i] - configs[j])\n\n        self._distances = distances\n\n    def init_distances(self, config_ids, exclude_configs=False):\n        \"\"\"\n        Initialize pairwise distances between configurations.\n\n        Parameters:\n        X (np.ndarray): Encoded data matrix.\n        config_ids (List[int]): Corresponding config_ids.\n        exclude_configs (bool): Whether to exclude the passed X. Default is False.\n\n        Returns:\n        np.ndarray: Pairwise distances matrix.\n        \"\"\"\n        if not exclude_configs:\n            self.calculate_distances()\n        else:\n            return np.zeros((0, 0))\n\n    def update_distances(self, X, distances, config, rejection_threshold=0.0):\n        \"\"\"\n        Update pairwise distances with a new configuration.\n\n        Parameters:\n        X (np.ndarray): Encoded data matrix.\n        distances (np.ndarray): Pairwise distances matrix.\n        config (np.ndarray): New configuration to add.\n        rejection_threshold (float): Threshold for rejecting the config. 
Default is 0.0.\n\n        Returns:\n        bool: Whether the config was rejected or not.\n        \"\"\"\n        n_configs = X.shape[0]\n        new_distances = np.zeros((n_configs + 1, n_configs + 1))\n        rejected = False\n\n        if n_configs > 0:\n            new_distances[:n_configs, :n_configs] = distances[:, :]\n            for j in range(n_configs):\n                d = np.linalg.norm(X[j] - config)\n                if rejection_threshold is not None:\n                    if d < rejection_threshold:\n                        rejected = True\n                        break\n\n                new_distances[n_configs, j] = new_distances[j, n_configs] = d\n\n        if not rejected:\n            X = np.vstack((X, config))\n            distances = new_distanceslist\n\n        return rejected\n        \n    def get_random_boundary_points(self, num_samples):\n        num_dims = len(self.ranges)\n    \n        combinations = list(itertools.product(*self.ranges))\n        \n        # random_boundary_indices = np.random.choice(len(combinations), num_samples, replace=False)\n        # random_boundary_points = [combinations[i] for i in random_boundary_indices]\n\n        return np.array(combinations)\n\n        \n    def get_mds(self):\n\n        if self._distances is None:\n            raise RuntimeError(\"You need to call `calculate` first.\")\n\n        mds = MDS(n_components=2, dissimilarity=\"precomputed\", random_state=0)\n        self._reduced_data =  mds.fit_transform(self._distances).tolist()\n    \n    def plot_embedding(self):\n        \"\"\"\n        Plot the low-dimensional embedding.\n\n        \"\"\"\n        plt.figure(figsize=(8, 6))\n        plt.scatter(self._reduced_data[:len(self.X), 0], self._reduced_data[:len(self.X), 1], c='b', label='MDS Embedding')\n        plt.scatter(self._reduced_data[len(self.X):, 0], self._reduced_data[len(self.X):, 1], c='r', marker= 'x', label='Boundary  points')\n\n        plt.xlabel('Component 1')\n        plt.ylabel('Component 2')\n        plt.title('MDS Embedding')\n        plt.legend()\n        plt.grid(True)\n        plt.show()\n\n\nif __name__ == '__main__':\n    # 示例数据\n    X = np.random.rand(100, 5)\n    bounds = [(0, 1), (0, 1),(0,1), (0,1), (0,1)]\n    fp = FootPrint(X, bounds)\n    fp.calculate_distances()\n    fp.get_mds()\n    fp.plot_embedding()"
  },
  {
    "path": "transopt/analysis/parameter_network.py",
    "content": "import os\nimport numpy as np\nimport pandas as pd\nimport networkx as nx\nimport matplotlib.pyplot as plt\nfrom itertools import combinations\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.tree import DecisionTreeRegressor\n\n\ndef calculate_importances(X, y):\n    \"\"\"\n    Calculates and returns parameter importances.\n    \"\"\"\n\n    model = DecisionTreeRegressor()\n    model.fit(X, y[:, np.newaxis])\n    feature_importances = model.feature_importances_\n\n    return feature_importances\n\n\ndef calculate_interaction(X, y):\n    num_parameters = X.shape[1]\n    h_matrix = np.zeros((num_parameters, num_parameters))\n\n    # 训练单个变量的模型\n    single_models = []\n    for i in range(num_parameters):\n        model = RandomForestRegressor(n_estimators=50, random_state=42)\n        model.fit(X[:, [i]], y)\n        single_models.append(model)\n\n    # 两两特征组合，计算 H^2\n    for (i, j) in combinations(range(num_parameters), 2):\n        model_jk = RandomForestRegressor(n_estimators=50, random_state=42)\n        model_jk.fit(X[:, [i, j]], y)\n        f_jk = model_jk.predict(X[:, [i, j]])\n\n        f_j = single_models[i].predict(X[:, [i]])\n        f_k = single_models[j].predict(X[:, [j]])\n\n        numerator = np.sqrt(np.sum((f_jk - f_j - f_k) ** 2))\n\n        h_matrix[i, j] = numerator\n        h_matrix[j, i] = h_matrix[i, j]\n\n    mean = np.mean(h_matrix)\n    std = np.std(h_matrix)\n\n    normalized_matrix = (h_matrix - mean) / std\n    scaled_matrix = 1 / (1 + np.exp(-normalized_matrix))\n    \n    return scaled_matrix\n\n\ndef plot_network(X, y, nodes):\n    G = nx.Graph()\n    fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(15, 9))\n    \n    nodes_weight = calculate_importances(X, y)\n    for node, weight in zip(nodes, nodes_weight):\n        G.add_node(node, weight=weight)\n\n    edges_weight = calculate_interaction(X, y)\n    \n    for i in range(len(nodes)):\n        for j in range(i + 1, len(nodes)):\n            weight = np.random.uniform(0, 1)  # 生成一个0到1之间的随机权重\n            G.add_edge(nodes[i], nodes[j], weight=weight)\n        \n        \n        \n    # for i in range(5):\n    #     for j in range(i + 1, 5):\n    #         weight = edges_weight[i, j]\n    #         G.add_edge(nodes[i], nodes[j], weight=weight)\n    \n    # 设置节点的位置为圆形布局\n    pos = nx.circular_layout(G)\n\n    # 创建颜色映射\n    node_cmap = plt.cm.Greens\n    edge_cmap = plt.cm.Blues\n\n    # 节点的颜色根据权重映射\n    node_color = [node_cmap(data['weight']) for v, data in G.nodes(data=True)]\n    node_size = [data['weight'] * 2000 + 1000 for v, data in G.nodes(data=True)]\n    node_alpha = [data['weight'] for v, data in G.nodes(data=True)]  # 透明度根据权重调整\n\n    # 绘制网络图\n    edges = G.edges(data=True)\n    nx.draw_networkx_nodes(G, pos, node_color=node_color, node_size=node_size, alpha=node_alpha)\n    nx.draw_networkx_labels(G, pos, font_color='white', font_size=16)\n\n    # 单独绘制每条边，设置颜色和透明度\n    for u, v, data in edges:\n        color = edge_cmap(data['weight'])\n        nx.draw_networkx_edges(G, pos, edgelist=[(u, v)], width=3, alpha=data['weight'], edge_color=[color])\n\n    fig.set_facecolor(\"None\")\n    ax.set_facecolor(\"#191C36\")\n    # ax.axis('off')\n\n    path = os.getcwd()\n    save_path = os.path.join(path, \"webui/src/pictures/parameter_network.png\")\n    plt.savefig(save_path, bbox_inches='tight')\n    plt.clf()\n    plt.close()\n    # 显示图形\n    # plt.show()\n\n\n\nif __name__ == \"__main__\":\n    np.random.seed(0)\n    X = np.random.normal(0, 1, (100, 5))  # 5个特征\n    y = 5 
* X[:, 0] * X[:, 1] + 3 * X[:, 2] + X[:, 3] + np.random.normal(0, 0.5, 100)\n    plot_network(X, y, nodes=['X1', 'X2', 'X3', 'X4', 'X5'])"
  },
  {
    "path": "transopt/analysis/table.py",
    "content": "import numpy as np\nfrom collections import defaultdict\nfrom transopt.utils.sk import Rx\nimport scipy\nimport os\nfrom multiprocessing import Process, Manager\nfrom transopt.analysis.table_to_latex import matrix_to_latex\nfrom transopt.analysis.compile_tex import compile_tex\nfrom transopt.agent.services import Services\n\n\nclass Result():\n    def __init__(self):\n        self.X = None\n        self.Y = None\n        self.best_X = None\n        self.best_Y = None\n\n\ndef get_results(task_names):\n    manager = Manager()\n    task_queue = manager.Queue()\n    result_queue = manager.Queue()\n    db_lock = manager.Lock()\n    services = Services(task_queue, result_queue, db_lock)\n\n    results = {}\n    methods = []\n    tasks = []\n    for group_id, group in enumerate(task_names):\n        for task_name in group:\n            r = Result()\n            table_info = services.data_manager.db.query_dataset_info(task_name)\n            task = table_info['additional_config']['problem_name']\n            method = table_info['additional_config']['Model']\n            seed = table_info['additional_config']['seeds']\n            if method not in methods:\n                methods.append(method)\n            if task not in tasks:\n                tasks.append(task)\n            \n            all_data = services.data_manager.db.select_data(task_name)\n            objectives = table_info[\"objectives\"]\n            obj = objectives[0][\"name\"]\n            obj_data = [data[obj] for data in all_data]\n            var_data = [[data[var[\"name\"]] for var in table_info[\"variables\"]] for data in all_data]\n            r.X = np.array(var_data)\n            r.Y = np.array(obj_data)\n            best_id = np.argmin(r.Y)\n            r.best_X = r.X[best_id]\n            r.best_Y = r.Y[best_id]\n            if task not in results:\n                results[task] = defaultdict(dict)\n            if method not in results[task]:\n                results[task][method] = defaultdict(dict)\n            results[task][method][seed] = r\n\n    return results, methods, tasks\n\n\n\ndef record_mean_std(task_names, save_path, **kwargs):\n    # Similar to record_mean_std function in PeerComparison.py\n    res_mean = {}\n    res_std = {}\n    res_sig = {}\n    results, methods, tasks = get_results(task_names)\n    for task_name, task_r in results.items():\n        result_mean = []\n        result_std = []\n        data = {}\n        data_mean = {}\n        for method, method_r in task_r.items():\n            best = []\n            for seed, result_obj in method_r.items():\n                best.append(result_obj.best_Y)\n                data[method] = best.copy()\n                data_mean[method] = (np.mean(best), np.std(best))\n                result_mean.append(np.mean(best))\n                result_std.append(np.std(best))\n\n        res_mean[task_name] = result_mean\n        res_std[task_name] = result_std\n        rst_m = {}\n        sorted_dic = sorted(data_mean.items(), key=lambda kv: (kv[1][0]))\n        for method in methods:\n            if method == sorted_dic[0][0]:\n                rst_m[method] = '-'\n                continue\n            s, p = scipy.stats.mannwhitneyu(data[sorted_dic[0][0]], data[method], alternative='two-sided')\n            if p < 0.05:\n                rst_m[method] = '+'\n            else:\n                rst_m[method] = '-'\n        res_sig[task_name] = rst_m\n    latex_code = matrix_to_latex({'mean':res_mean, 'std':res_std, 'significance':res_sig}, tasks, 
methods,\n                                 caption='Performance comparisons of the quality of solutions obtained by different algorithms.')\n    save_path = os.path.join(save_path, 'Overview')\n    os.makedirs(save_path, exist_ok=True)\n    tex_save_path = os.path.join(save_path, 'tex')\n    os.makedirs(tex_save_path, exist_ok=True)\n    table_path = os.path.join(save_path, 'Table')\n    os.makedirs(table_path, exist_ok=True)\n    \n    with open(os.path.join(tex_save_path, 'compare_mean.tex'), 'w') as f:\n        f.write(latex_code)\n    try:\n        compile_tex(os.path.join(tex_save_path, 'compare_mean.tex'), table_path)\n    except:\n        pass\n\n    print(f\"LaTeX code has been saved to {tex_save_path}\")\n\n\nif __name__ == \"__main__\":\n    task_names = [['Sphere_w1_s1_1715591439', 'Sphere_w1_s1_1715592120']]\n    save_path = '/home/gsfall'\n    record_mean_std(task_names, save_path)"
  },
  {
    "path": "transopt/analysis/table_to_latex.py",
    "content": "import numpy as np\nfrom typing import Union, Dict\n\n\ndef matrix_to_latex(Data: Dict, col_names, row_names, caption, oder=\"min\"):\n    mean = Data[\"mean\"]\n    std = Data[\"std\"]\n    significance = Data[\"significance\"]\n    num_cols = len(mean.keys())\n    num_rows = len(row_names)\n\n    if len(col_names) != num_cols or len(row_names) != num_rows:\n        raise ValueError(\n            \"Mismatch between matrix dimensions and provided row/column names.\"\n        )\n\n    latex_code = []\n    # 添加文档类和宏包\n    latex_code.append(\"\\\\documentclass{article}\")\n    latex_code.append(\"\\\\usepackage{geometry}\")\n    latex_code.append(\"\\\\geometry{a4paper, margin=1in}\")\n    latex_code.append(\"\\\\usepackage{graphicx}\")\n    latex_code.append(\"\\\\usepackage{colortbl}\")\n    latex_code.append(\"\\\\usepackage{booktabs}\")\n    latex_code.append(\"\\\\usepackage{threeparttable}\")\n    latex_code.append(\"\\\\usepackage{caption}\")\n    latex_code.append(\"\\\\usepackage{xcolor}\")\n    latex_code.append(\"\\\\pagestyle{empty}\")\n\n    # 开始文档\n    latex_code.append(\"\\\\begin{document}\")\n    latex_code.append(\"\")\n    latex_code.append(\"\\\\begin{table*}[t!]\")\n    latex_code.append(\"    \\\\scriptsize\")\n    latex_code.append(\"    \\\\centering\")\n    latex_code.append(f\"    \\\\caption{{{caption}}}\")\n    latex_code.append(\"    \\\\resizebox{1.0\\\\textwidth}{!}{\")\n    latex_code.append(\"    \\\\begin{tabular}{c|\" + \"\".join([\"c\"] * (num_rows)) + \"}\")\n    latex_code.append(\"        \\\\hline\")\n\n    # Adding column names\n    col_header = \" & \".join([\"\"] + row_names) + \" \\\\\\\\\"\n    latex_code.append(\"        \" + col_header)\n    latex_code.append(\"        \\\\hline\")\n\n    # Adding rows\n    for i in range(num_cols):\n        str_data = []\n        for j in range(num_rows):\n            str_format = \"\"\n            if oder == \"min\":\n                if mean[col_names[i]][j] == np.min(mean[col_names[i]]):\n                    str_format += \"\\cellcolor[rgb]{ .682,  .667,  .667}\\\\textbf{\"\n                    str_format += \"%.3E(%.3E)\" % (\n                        float(mean[col_names[i]][j]),\n                        std[col_names[i]][j],\n                    )\n                    str_format += \"}\"\n                    str_data.append(str_format)\n                else:\n                    if significance[col_names[i]][row_names[j]] == \"+\":\n                        str_data.append(\n                            \"%.3E(%.3E)$^\\dagger$\"\n                            % (float(mean[col_names[i]][j]), std[col_names[i]][j])\n                        )\n                    else:\n                        str_data.append(\n                            \"%.3E(%.3E)\"\n                            % (float(mean[col_names[i]][j]), std[col_names[i]][j])\n                        )\n            else:\n                if mean[col_names[i]][j] == np.max(mean[col_names[i]]):\n                    str_format += \"\\cellcolor[rgb]{ .682,  .667,  .667}\\\\textbf{\"\n                    str_format += \"%.3E(%.3E)\" % (\n                        float(mean[col_names[i]][j]),\n                        std[col_names[i]][j],\n                    )\n                    str_format += \"}\"\n                    str_data.append(str_format)\n                else:\n                    if significance[col_names[i]][row_names[j]] == \"+\":\n                        str_data.append(\n                            \"%.3E(%.3E)$^\\dagger$\"\n     
                       % (float(mean[col_names[i]][j]), std[col_names[i]][j])\n                        )\n                    else:\n                        str_data.append(\n                            \"%.3E(%.3E)\"\n                            % (float(mean[col_names[i]][j]), std[col_names[i]][j])\n                        )\n        test_name = col_names[i] + col_names[i]\n        row_data = \" & \".join([\"\\\\texttt{\" + f\"{test_name}\" + \"}\"] + str_data) + \" \\\\\\\\\"\n        latex_code.append(\"        \" + row_data)\n\n    latex_code.append(\"        \\\\hline\")\n    latex_code.append(\"    \\\\end{tabular}\")\n    latex_code.append(\"    }\")\n    latex_code.append(\"    \\\\begin{tablenotes}\")\n    latex_code.append(\"        \\\\tiny\")\n    latex_code.append(\n        \"        \\\\item The labels in the first column are the combination of the first letter of test problem and the number of variables, e.g., A4 is Ackley problem with $n=4$.\"\n    )\n    latex_code.append(\n        \"        \\\\item $^\\\\dagger$ indicates that the best algorithm is significantly better than the other one according to the Wilcoxon signed-rank test at a 5\\\\% significance level.\"\n    )\n    latex_code.append(\"    \\\\end{tablenotes}\")\n    latex_code.append(\"\\\\end{table*}%\")\n    latex_code.append(\"\\\\end{document}\")\n\n    return \"\\n\".join(latex_code)\n"
  },
  {
    "path": "transopt/benchmark/CPD/__init__.py",
    "content": "from transopt.benchmark.CPD.PCM.pcm import PCM\nfrom transopt.benchmark.CPD.Absolut.absolut import Absolut"
  },
  {
    "path": "transopt/benchmark/CSSTuning/Compiler.py",
    "content": "import numpy as np\nfrom csstuning.compiler.compiler_benchmark import GCCBenchmark, LLVMBenchmark\n\nfrom transopt.agent.registry import problem_registry\nfrom transopt.benchmark.problem_base.non_tab_problem import NonTabularProblem\nfrom transopt.space.fidelity_space import FidelitySpace\nfrom transopt.space.search_space import SearchSpace\nfrom transopt.space.variable import *\n\n\n@problem_registry.register(\"Compiler_GCC\")\nclass GCCTuning(NonTabularProblem):\n    problem_type = 'compiler'\n    workloads = GCCBenchmark.AVAILABLE_WORKLOADS\n    num_variables = 104\n    num_objectives = 3\n    fidelity = None\n    \n    def __init__(self, task_name, budget_type, budget, seed, workload, knobs=None, **kwargs):        \n        self.workload = workload or GCCBenchmark.AVAILABLE_WORKLOADS[0]\n        self.benchmark = GCCBenchmark(workload=self.workload)\n        \n        all_knobs = self.benchmark.get_config_space()\n        self.knobs = {k: all_knobs[k] for k in (knobs or all_knobs)}\n        self.num_variables = len(self.knobs)\n        \n        super().__init__(\n            task_name=task_name,\n            budget=budget,\n            budget_type=budget_type,\n            seed=seed,\n            workload=workload,\n        )\n        np.random.seed(seed)\n\n    def get_configuration_space(self):\n        variables = []\n        for knob_name, knob_details in self.knobs.items():\n            knob_type = knob_details[\"type\"]\n            range_ = knob_details[\"range\"]\n            \n            if knob_type == \"enum\":\n                variables.append(Categorical(knob_name, range_))\n            elif knob_type == \"integer\":\n                variables.append(Integer(knob_name, range_))\n\n        return SearchSpace(variables)\n    \n    def get_fidelity_space(self):\n        return FidelitySpace([])\n    \n    def get_objectives(self) -> dict:\n        return {\n            \"execution_time\": \"minimize\",\n            \"compilation_time\": \"minimize\",\n            \"file_size\": \"minimize\",\n            # \"maxrss\": \"minimize\",\n            # \"PAPI_TOT_CYC\": \"minimize\",\n            # \"PAPI_TOT_INS\": \"minimize\",\n            # \"PAPI_BR_MSP\": \"minimize\",\n            # \"PAPI_BR_PRC\": \"minimize\",\n            # \"PAPI_BR_CN\": \"minimize\",\n            # \"PAPI_MEM_WCY\": \"minimize\",\n        }\n    \n    def get_problem_type(self):\n        return self.problem_type\n    \n    def objective_function(self, configuration: dict, fidelity = None, seed = None, **kwargs):        \n        try:\n            perf = self.benchmark.run(configuration)\n            return {obj: perf.get(obj, 1e10) for obj in self.get_objectives()}\n        except Exception as e:\n            return {obj: 1e10 for obj in self.get_objectives()}\n        \n\n\n@problem_registry.register(\"Compiler_LLVM\")\nclass LLVMTuning(NonTabularProblem):\n    problem_type = 'compiler'\n    workloads = LLVMBenchmark.AVAILABLE_WORKLOADS\n    num_variables = 82\n    num_objectives = 3\n    fidelity = None\n    \n    def __init__(self, task_name, budget_type, budget, seed, workload, knobs=None, **kwargs):\n        self.workload = workload or LLVMBenchmark.AVAILABLE_WORKLOADS[0]\n        self.benchmark = LLVMBenchmark(workload=self.workload)\n        \n        all_knobs = self.benchmark.get_config_space()\n        self.knobs = {k: all_knobs[k] for k in (knobs or all_knobs)}\n        self.num_variables = len(self.knobs)\n        \n        super().__init__(\n            
task_name=task_name,\n            budget=budget,\n            budget_type=budget_type,\n            seed=seed,\n            workload=workload,\n        )\n        np.random.seed(seed)\n\n    def get_configuration_space(self):\n        variables = []\n        for knob_name, knob_details in self.knobs.items():\n            knob_type = knob_details[\"type\"]\n            range_ = knob_details[\"range\"]\n\n            if knob_type == \"enum\":\n                variables.append(Categorical(knob_name, range_))\n            elif knob_type == \"integer\":\n                variables.append(Integer(knob_name, range_))\n\n        return SearchSpace(variables)\n\n    def get_fidelity_space(self):\n        return FidelitySpace([])\n\n    def get_objectives(self) -> dict:\n        return {\n            \"execution_time\": \"minimize\",\n            \"compilation_time\": \"minimize\",\n            \"file_size\": \"minimize\",\n        }\n\n    def get_problem_type(self):\n        return self.problem_type\n\n    def objective_function(self, configuration: dict, fidelity=None, seed=None, **kwargs):\n        try:\n            perf = self.benchmark.run(configuration)\n            return {obj: perf.get(obj, 1e10) for obj in self.get_objectives()}\n        except Exception:\n            # Fall back to a large penalty value when the benchmark run fails.\n            return {obj: 1e10 for obj in self.get_objectives()}\n\n\nif __name__ == \"__main__\":\n    # Smoke test against the raw csstuning benchmark with an empty (default)\n    # configuration.\n    benchmark = GCCBenchmark(workload=\"cbench-automotive-bitcount\")\n    conf = {}\n    benchmark.run(conf)\n
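\n    # A minimal sketch of the TransOPT wrapper around the same benchmark\n    # (illustrative arguments; assumes a working csstuning installation):\n    problem = GCCTuning(\n        task_name=\"gcc_demo\", budget_type=\"FEs\", budget=10, seed=0, workload=None\n    )\n    print(f\"{problem.num_variables} tunable knobs for workload {problem.workload}\")\n"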
  },
  {
    "path": "transopt/benchmark/CSSTuning/DBMS.py",
    "content": "import numpy as np\nfrom csstuning.dbms.dbms_benchmark import MySQLBenchmark\n\nfrom transopt.agent.registry import problem_registry\nfrom transopt.benchmark.problem_base.non_tab_problem import NonTabularProblem\nfrom transopt.space.fidelity_space import FidelitySpace\nfrom transopt.space.search_space import SearchSpace\nfrom transopt.space.variable import *\n\n\n@problem_registry.register(\"DBMS_MySQL\")\nclass MySQLTuning(NonTabularProblem):\n    problem_type = 'dbms'\n    workloads = MySQLBenchmark.AVAILABLE_WORKLOADS\n    num_variables = 197\n    num_objectives = 2\n    fidelity = None\n    \n    def __init__(self, task_name, budget_type, budget, seed, workload, knobs=None, **kwargs):        \n        self.workload = workload or MySQLBenchmark.AVAILABLE_WORKLOADS[0]\n\n        self.benchmark = MySQLBenchmark(workload=self.workload)\n        self.knobs = self.benchmark.get_config_space()\n        self.num_variables = len(self.knobs)\n        \n        super().__init__(task_name, budget_type, budget, workload, seed)\n        np.random.seed(seed)\n\n\n    def get_configuration_space(self):\n        variables = []\n        \n        for knob_name, knob_details in self.knobs.items():\n            knob_type = knob_details[\"type\"]\n            range_ = knob_details[\"range\"]\n            \n            if knob_type == \"enum\":\n                variables.append(Categorical(knob_name, range_))\n            elif knob_type == \"integer\":\n                if range_[1] > np.iinfo(np.int64).max:\n                    variables.append(ExponentialInteger(knob_name, range_))\n                else:\n                    variables.append(Integer(knob_name, range_))\n\n        return SearchSpace(variables)\n    \n    def get_fidelity_space(self):\n        return FidelitySpace([])\n    \n    def get_objectives(self) -> dict:\n        return {\n            \"latency\": \"minimize\",\n            \"throughput\": \"maximize\",\n        }\n        \n    def get_problem_type(self):\n        return self.problem_type\n        \n    def objective_function(self, configuration: dict, fidelity = None, seed = None, **kwargs):\n        try:\n            perf = self.benchmark.run(configuration)\n            return {obj: perf.get(obj, 1e10) for obj in self.get_objectives()}\n        except Exception as e:\n            return {obj: 1e10 for obj in self.get_objectives()}\n\nif __name__ == \"__main__\":\n    # a = DBMSTuning(\"1\", 121, 0, 1)\n    pass\n"
  },
  {
    "path": "transopt/benchmark/CSSTuning/__init__.py",
    "content": "from transopt.benchmark.CSSTuning.Compiler import GCCTuning\nfrom transopt.benchmark.CSSTuning.DBMS import MySQLTuning"
  },
  {
    "path": "transopt/benchmark/DownloadBench/references",
    "content": "https://github.com/automl/HPOBench\n\nhttps://github.com/releaunifreiburg/HPO-B"
  },
  {
    "path": "transopt/benchmark/HBOROB/algorithms.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.autograd as autograd\n\nimport copy\nimport numpy as np\nfrom collections import OrderedDict\n\n\nfrom transopt.benchmark.HPOOOD import networks\nfrom transopt.benchmark.HPOOOD.misc import (\n    random_pairs_of_minibatches, split_meta_train_test, ParamDict,\n    MovingAverage, l2_between_dicts, proj, Nonparametric, SupConLossLambda\n)\n\n\n\nALGORITHMS = [\n    'ERM',\n]\n\ndef get_algorithm_class(algorithm_name):\n    \"\"\"Return the algorithm class with the given name.\"\"\"\n    if algorithm_name not in globals():\n        raise NotImplementedError(\"Algorithm not found: {}\".format(algorithm_name))\n    return globals()[algorithm_name]\n\nclass Algorithm(torch.nn.Module):\n    \"\"\"\n    A subclass of Algorithm implements a domain generalization algorithm.\n    Subclasses should implement the following:\n    - update()\n    - predict()\n    \"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(Algorithm, self).__init__()\n        self.hparams = hparams\n\n    def update(self, minibatches, unlabeled=None):\n        \"\"\"\n        Perform one update step, given a list of (x, y) tuples for all\n        environments.\n\n        Admits an optional list of unlabeled minibatches from the test domains,\n        when task is domain_adaptation.\n        \"\"\"\n        raise NotImplementedError\n\n    def predict(self, x):\n        raise NotImplementedError\n\n\nclass MLP(nn.Module):\n    \"\"\"Just  an MLP\"\"\"\n    def __init__(self, n_inputs, n_outputs, hparams):\n        super(MLP, self).__init__()\n        self.input = nn.Linear(n_inputs, hparams['mlp_width'])\n        self.dropout = nn.Dropout(hparams['mlp_dropout'])\n        self.hiddens = nn.ModuleList([\n            nn.Linear(hparams['mlp_width'], hparams['mlp_width'])\n            for _ in range(hparams['mlp_depth']-2)])\n        self.output = nn.Linear(hparams['mlp_width'], n_outputs)\n        self.n_outputs = n_outputs\n\n    def forward(self, x):\n        x = self.input(x)\n        x = self.dropout(x)\n        x = F.relu(x)\n        for hidden in self.hiddens:\n            x = hidden(x)\n            x = self.dropout(x)\n            x = F.relu(x)\n        x = self.output(x)\n        return x\n\n\nclass ResNet(torch.nn.Module):\n    \"\"\"ResNet with the softmax chopped off and the batchnorm frozen\"\"\"\n    def __init__(self, input_shape, hparams):\n        super(ResNet, self).__init__()\n        if hparams['resnet18']:\n            self.network = torchvision.models.resnet18(pretrained=True)\n            self.n_outputs = 512\n        else:\n            self.network = torchvision.models.resnet50(pretrained=True)\n            self.n_outputs = 2048\n\n        # self.network = remove_batch_norm_from_resnet(self.network)\n\n        # adapt number of channels\n        nc = input_shape[0]\n        if nc != 3:\n            tmp = self.network.conv1.weight.data.clone()\n\n            self.network.conv1 = nn.Conv2d(\n                nc, 64, kernel_size=(7, 7),\n                stride=(2, 2), padding=(3, 3), bias=False)\n\n            for i in range(nc):\n                self.network.conv1.weight.data[:, i, :, :] = tmp[:, i % 3, :, :]\n\n        # save memory\n        del self.network.fc\n        self.network.fc = Identity()\n\n        self.freeze_bn()\n        self.hparams = hparams\n        self.dropout = nn.Dropout(hparams['resnet_dropout'])\n\n    def forward(self, x):\n        \"\"\"Encode x into 
a feature vector of size n_outputs.\"\"\"\n        return self.dropout(self.network(x))\n\n    def train(self, mode=True):\n        \"\"\"\n        Override the default train() to freeze the BN parameters\n        \"\"\"\n        super().train(mode)\n        self.freeze_bn()\n\n    def freeze_bn(self):\n        for m in self.network.modules():\n            if isinstance(m, nn.BatchNorm2d):\n                m.eval()\n\n\n\nclass ERM(Algorithm):\n    \"\"\"\n    Empirical Risk Minimization (ERM)\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(ERM, self).__init__(input_shape, num_classes, num_domains,\n                                  hparams)\n        self.featurizer = networks.Featurizer(input_shape, self.hparams)\n        self.classifier = networks.Classifier(\n            self.featurizer.n_outputs,\n            num_classes,\n            self.hparams['nonlinear_classifier'])\n\n        self.network = nn.Sequential(self.featurizer, self.classifier)\n        self.optimizer = torch.optim.Adam(\n            self.network.parameters(),\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams['weight_decay'],\n        )\n\n    def update(self, minibatches, unlabeled=None):\n        all_x = torch.cat([x for x, y in minibatches])\n        all_y = torch.cat([y for x, y in minibatches])\n        loss = F.cross_entropy(self.predict(all_x), all_y)\n\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        return {'loss': loss.item()}\n\n    def predict(self, x):\n        return self.network(x)\n"
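\n\nif __name__ == \"__main__\":\n    # A minimal smoke-test sketch of the update()/predict() contract, with\n    # illustrative hyperparameters; it assumes networks.Featurizer supports a\n    # (3, 32, 32) input shape.\n    hparams = {\"lr\": 1e-3, \"weight_decay\": 5e-4, \"nonlinear_classifier\": False}\n    algo = get_algorithm_class(\"ERM\")((3, 32, 32), 10, 1, hparams)\n    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))\n    print(algo.update([(x, y)]))  # one optimization step; prints {'loss': ...}\n"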
  },
  {
    "path": "transopt/benchmark/HBOROB/hporobust.py",
    "content": "# Install robustbench if you haven't already\n# !pip install robustbench\n\nfrom robustbench.utils import load_model\nfrom robustbench.data import load_cifar10\nfrom robustbench.eval import benchmark\n\n# Step 1: Load a pre-trained robust model from RobustBench\nmodel = load_model(model_name='Standard', dataset='cifar10', threat_model='Linf')\n\n# Step 2: Load the CIFAR-10 test dataset\nx_test, y_test = load_cifar10(n_examples=1000)\n\n# Step 3: Evaluate the model's robustness\n# We will use the AutoAttack suite to evaluate the model.\nfrom robustbench.utils import clean_accuracy, AutoAttack\n\n# Evaluate clean accuracy\nclean_acc = clean_accuracy(model, x_test, y_test)\nprint(f'Clean accuracy: {clean_acc * 100:.2f}%')\n\n# Step 4: Perform adversarial evaluation using AutoAttack\nadversary = AutoAttack(model, norm='Linf', eps=8/255)\nadv_acc = adversary.run_standard_evaluation(x_test, y_test, bs=128)\n\nprint(f'Robust accuracy against AutoAttack: {adv_acc * 100:.2f}%')"
  },
  {
    "path": "transopt/benchmark/HBOROB/test.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom robustbench.data import load_cifar10, load_cifar10c\nfrom robustbench.utils import clean_accuracy, load_model\n\nimport transopt.benchmark.HPO.networks\nfrom transopt.benchmark.HPO.algorithms import ERM\nfrom transopt.benchmark.HPO.wide_resnet import WideResNet\n\n\n\n\n\n\n\n\n# 加载 CIFAR-10 数据集\nx_test, y_test = load_cifar10(load_cifar10='~/transopt_files/data/')\n\n# 转换为 Tensor\nx_test = torch.tensor(x_test, dtype=torch.float32)\ny_test = torch.tensor(y_test, dtype=torch.long)\n\nhparams = {\n    'lr': 0.001,\n    'weight_decay': 5e-4,\n    'nonlinear_classifier': True\n}\n\ninput_shape = (3, 32, 32)\nnum_classes = 10\nnum_domains = 1\n\nmodel = ERM(input_shape, num_classes, num_domains, hparams)\n\nfrom torch.utils.data import DataLoader, TensorDataset\n\n# 使用训练数据\ntrain_loader = DataLoader(TensorDataset(x_test, y_test), batch_size=64, shuffle=True)\n\n# 训练模型\nfor epoch in range(10):  # 训练10个epoch\n    for batch in train_loader:\n        minibatches = [(batch[0], batch[1])]\n        model.update(minibatches)\n\n    print(f\"Epoch {epoch + 1} completed\")\n    \n    \n\ncorruptions = ['fog']\nx_test, y_test = load_cifar10c(n_examples=1000, corruptions=corruptions, severity=5)\n\nfor model_name in ['Standard', 'Engstrom2019Robustness', 'Rice2020Overfitting',\n                   'Carmon2019Unlabeled']:\n    model = load_model(model_name, dataset='cifar10', threat_model='Linf')\n    acc = clean_accuracy(model, x_test, y_test)\n    print(f'Model: {model_name}, CIFAR-10-C accuracy: {acc:.1%}')"
  },
  {
    "path": "transopt/benchmark/HPO/HPO.py",
    "content": "import collections\nimport os\nimport random\nimport time\nimport json\nfrom typing import Dict, Union\nfrom tqdm import tqdm\n\nimport numpy as np\nimport torch\nfrom torch.utils.data import DataLoader\nimport matplotlib.pyplot as plt\n\nfrom transopt.benchmark.HPO import datasets\n\nimport transopt.benchmark.HPO.misc as misc\nfrom transopt.agent.registry import problem_registry\nfrom transopt.benchmark.HPO.fast_data_loader import (FastDataLoader,\n                                                     InfiniteDataLoader)\nfrom transopt.benchmark.problem_base.non_tab_problem import NonTabularProblem\nfrom transopt.space.fidelity_space import FidelitySpace\nfrom transopt.space.search_space import SearchSpace\nfrom transopt.space.variable import *\nfrom transopt.benchmark.HPO import algorithms\nfrom transopt.benchmark.HPO.hparams_registry import get_hparam_space\nfrom transopt.benchmark.HPO.networks import SUPPORTED_ARCHITECTURES\n\nclass HPO_base(NonTabularProblem):\n    problem_type = 'hpo'\n    num_variables = 4\n    num_objectives = 1\n    workloads = []\n    fidelity = None\n    \n    ALGORITHMS = [\n        'ERM',\n        # 'BayesianNN',\n        # 'GLMNet'\n    ]\n    \n    ARCHITECTURES = SUPPORTED_ARCHITECTURES\n    \n    DATASETS = [\n    \"RobCifar10\",\n    # \"RobCifar100\",\n    # \"RobImageNet\",\n    ]\n\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, algorithm, architecture, model_size, **kwargs\n        ):\n        \n        # Check if algorithm is valid\n        if algorithm not in HPO_base.ALGORITHMS:\n            raise ValueError(f\"Invalid algorithm: {algorithm}. Must be one of {HPO_base.ALGORITHMS}\")\n        self.algorithm_name = algorithm\n\n        # Check if workload is valid\n        if workload < 0 or workload >= len(HPO_base.DATASETS):\n            raise ValueError(f\"Invalid workload: {workload}. Must be between 0 and {len(HPO_base.DATASETS) - 1}\")\n        self.dataset_name = HPO_base.DATASETS[workload]\n\n        # Check if architecture is valid\n        if architecture not in HPO_base.ARCHITECTURES:\n            raise ValueError(f\"Invalid architecture: {architecture}. Must be one of {list(HPO_base.ARCHITECTURES.keys())}\")\n        if model_size not in HPO_base.ARCHITECTURES[architecture]:\n            raise ValueError(f\"Invalid model_size: {model_size} for architecture: {architecture}. 
Must be one of {HPO_base.ARCHITECTURES[architecture]}\")\n        self.architecture = architecture\n        self.model_size = model_size\n\n        self.hpo_optimizer = kwargs.get('optimizer', 'random')\n\n        super(HPO_base, self).__init__(\n            task_name=task_name,\n            budget=budget,\n            budget_type=budget_type,\n            seed=seed,\n            workload=workload,\n        )\n\n        self.query_counter = kwargs.get('query_counter', 0)\n        self.trial_seed = seed\n        self.hparams = {}\n\n        base_dir = kwargs.get('base_dir', os.path.expanduser('~'))\n        self.data_dir = os.path.join(base_dir, 'transopt_tmp/data/')\n        self.model_save_dir = os.path.join(base_dir, f'transopt_tmp/output/models/{self.hpo_optimizer}_{self.algorithm_name}_{self.architecture}_{self.model_size}_{self.dataset_name}_{seed}/')\n        self.results_save_dir = os.path.join(base_dir, f'transopt_tmp/output/results/{self.hpo_optimizer}_{self.algorithm_name}_{self.architecture}_{self.model_size}_{self.dataset_name}_{seed}/')\n\n        print(f\"Selected algorithm: {self.algorithm_name}, dataset: {self.dataset_name}\")\n        print(f\"Model architecture: {self.architecture}, model size: {self.model_size}\")\n\n        os.makedirs(self.model_save_dir, exist_ok=True)\n        os.makedirs(self.results_save_dir, exist_ok=True)\n\n        random.seed(seed)\n        np.random.seed(seed)\n        torch.manual_seed(seed)\n\n        # Get the GPU ID from kwargs, default to 0 if not specified\n        gpu_id = kwargs.get('gpu_id', 0)\n\n        if torch.cuda.is_available():\n            # Check if the specified GPU exists\n            if gpu_id < torch.cuda.device_count():\n                self.device = torch.device(f\"cuda:{gpu_id}\")\n            else:\n                print(f\"Warning: GPU {gpu_id} not found. Defaulting to CPU.\")\n                self.device = torch.device(\"cpu\")\n        else:\n            self.device = torch.device(\"cpu\")\n\n        print(f\"Using device: {self.device}\")\n\n        # Record the device actually used in hparams\n        self.hparams['device'] = str(self.device)\n\n        if self.dataset_name in vars(datasets):\n            self.dataset = vars(datasets)[self.dataset_name](root=self.data_dir, augment=kwargs.get('augment', None))\n        else:\n            raise NotImplementedError\n        self.mixup = kwargs.get('augment', None) == 'mixup'\n\n        print(f\"Using augment: {kwargs.get('augment', None)}\")\n\n        self.eval_loaders, self.eval_loader_names = self.create_test_loaders(128)\n\n        self.checkpoint_vals = collections.defaultdict(lambda: [])\n\n    def create_train_loaders(self, batch_size):\n        if not hasattr(self, 'dataset') or self.dataset is None:\n            raise ValueError(\"Dataset not initialized. 
Please ensure self.dataset is set before calling this method.\")\n        \n        train_loaders = FastDataLoader(\n            dataset=self.dataset.datasets['train'],\n            batch_size=batch_size,\n            num_workers=2)  # Assuming N_WORKERS is 2, adjust if needed\n        \n        val_loaders = FastDataLoader(\n            dataset=self.dataset.datasets['val'],\n            batch_size=batch_size,\n            num_workers=2)  # Assuming N_WORKERS is 2, adjust if needed\n\n        return train_loaders, val_loaders\n    \n\n    def create_test_loaders(self, batch_size):\n        if not hasattr(self, 'dataset') or self.dataset is None:\n            raise ValueError(\"Dataset not initialized. Please ensure self.dataset is set before calling this method.\")\n        \n        eval_loaders = []\n        eval_loader_names = []\n\n        # Get all available test set names\n        available_test_sets = self.dataset.get_available_test_set_names()\n\n        for test_set_name in available_test_sets:\n            if test_set_name.startswith('test_'):\n                eval_loaders.append(FastDataLoader(\n                    dataset=self.dataset.datasets[test_set_name],\n                    batch_size=batch_size,\n                    num_workers=2))  # Assuming N_WORKERS is 2, adjust if needed\n                eval_loader_names.append(test_set_name)\n\n        return eval_loaders, eval_loader_names\n    \n\n    def save_checkpoint(self, filename):\n        save_dict = {\n            \"model_input_shape\": self.dataset.input_shape,\n            \"model_num_classes\": self.dataset.num_classes,\n            \"model_hparams\": self.hparams,\n            \"model_dict\": self.algorithm.state_dict()\n        }\n        torch.save(save_dict, os.path.join(self.model_save_dir, filename))\n        \n    def get_configuration_space(\n        self, seed: Union[int, None] = None):\n\n        hparam_space = get_hparam_space(self.algorithm_name, self.model_size, self.architecture)\n        variables = []\n\n        for name, (hparam_type, range) in hparam_space.items():\n            if hparam_type == 'categorical':\n                variables.append(Categorical(name, range))\n            elif hparam_type == 'float':\n                variables.append(Continuous(name, range))\n            elif hparam_type == 'int':\n                variables.append(Integer(name, range))\n            elif hparam_type == 'log':\n                variables.append(LogContinuous(name, range))\n\n        ss = SearchSpace(variables)\n        return ss\n    \n    def get_fidelity_space(\n        self, seed: Union[int, None] = None):\n\n        fs = FidelitySpace([\n            Integer(\"epoch\", [1, 100])  # Adjust the range as needed\n        ])\n        return fs\n    \n    def train(self, configuration: dict):\n        \n        torch.backends.cudnn.deterministic = True\n        torch.backends.cudnn.benchmark = False\n        \n        self.epoches = configuration['epoch']\n        print(f\"Total epochs: {self.epoches}\")\n                \n        self.train_loader, self.val_loader = self.create_train_loaders(self.hparams['batch_size'])\n        \n        self.hparams['nonlinear_classifier'] = True\n    \n        for epoch in range(self.epoches):\n            epoch_start_time = time.time()\n            epoch_loss = 0.0\n            epoch_correct = 0\n            epoch_total = 0\n            \n            self.algorithm.train()\n            total_batches = len(self.train_loader)\n            for x, y in tqdm(self.train_loader, 
total=total_batches, desc=f\"Epoch {epoch+1}/{self.epoches}\", unit=\"batch\"):\n                step_start_time = time.time()\n                minibatches_device = [(x.to(self.device), y.to(self.device))]\n\n                step_vals = self.algorithm.update(minibatches_device)\n                self.checkpoint_vals['step_time'].append(time.time() - step_start_time)\n\n                for key, val in step_vals.items():\n                    self.checkpoint_vals[key].append(val)\n                \n                # Update epoch statistics\n                epoch_loss += step_vals.get('loss', 0.0)\n                epoch_correct += step_vals.get('correct', 0)\n                epoch_total += sum(len(x) for x, _ in minibatches_device)\n\n            # Compute and print epoch metrics\n            epoch_acc = epoch_correct / epoch_total if epoch_total > 0 else 0\n            epoch_loss /= len(self.train_loader)\n            print(f\"Epoch {epoch+1}/{self.epoches} - Loss: {epoch_loss:.4f}, Accuracy: {epoch_acc:.4f}\")\n\n        # Evaluate on validation set\n        val_acc = self.evaluate_loader(self.val_loader)\n\n        # Calculate final results after all epochs\n        results = {\n            'epoch': self.epoches,\n            'epoch_time': time.time() - epoch_start_time,\n            'train_loss': epoch_loss,\n            'train_acc': epoch_acc,\n            'val_acc': val_acc,\n        }\n\n        # Evaluate on all test loaders\n        for name, loader in zip(self.eval_loader_names, self.eval_loaders):\n            results[f'{name}_acc'] = self.evaluate_loader(loader)\n\n        # Calculate memory usage\n        results['mem_gb'] = torch.cuda.max_memory_allocated() / (1024.**3)\n\n        results['hparams'] = self.hparams\n        \n        return results\n\n    def save_epoch_results(self, results):\n        epoch_path = os.path.join(self.results_save_dir, f\"epoch_{results['epoch']}.json\")\n        with open(epoch_path, 'w') as f:\n            json.dump(results, f, indent=2)\n\n    def evaluate_loader(self, loader):\n        self.algorithm.eval()\n        correct = total = 0\n        with torch.no_grad():\n            for x, y in loader:\n                x, y = x.to(self.device), y.to(self.device)\n                p = self.algorithm.predict(x)\n                correct += (p.argmax(1).eq(y) if p.size(1) != 1 else p.gt(0).eq(y)).float().sum().item()\n                total += len(x)\n        self.algorithm.train()\n        return correct / total\n\n    def get_score(self, configuration: dict):\n        for key, value in configuration.items():\n            self.hparams[key] = value\n        \n        algorithm_class = algorithms.get_algorithm_class(self.algorithm_name)\n        self.algorithm = algorithm_class(self.dataset.input_shape, self.dataset.num_classes, self.architecture, self.model_size, self.mixup, self.device, self.hparams)\n        self.algorithm.to(self.device)\n        \n        self.query_counter += 1\n        results = self.train(configuration)\n        \n        # Construct filename with query and all hyperparameters\n        filename_parts = [f\"{self.query_counter}\"]\n        for key, value in configuration.items():\n            filename_parts.append(f\"{key}_{value}\")\n        filename = \"_\".join(filename_parts)\n\n        # Save results\n        epochs_path = os.path.join(self.results_save_dir, f\"{filename}.jsonl\")\n        with open(epochs_path, 'w') as f:\n            json.dump(results, f, indent=2)\n        \n        # Save final checkpoint and mark as done\n  
      self.save_checkpoint(f\"{filename}_model.pkl\")\n        with open(os.path.join(self.model_save_dir, 'done'), 'w') as f:\n            f.write('done')\n\n        val_acc = results['val_acc']\n        \n        return val_acc, results\n        \n\n    def objective_function(\n        self,\n        configuration,\n        fidelity = None,\n        seed = None,\n        **kwargs\n    ) -> Dict:\n\n        if fidelity is None:\n            fidelity = {\"epoch\": 50}\n        \n        # Convert log scale values back to normal scale\n        c = self.configuration_space.map_to_design_space(configuration)\n        \n        # Add fidelity (epoch) to the configuration\n        c[\"epoch\"] = fidelity[\"epoch\"]        \n        c['class_balanced'] = True\n        c['nonlinear_classifier'] = True\n        \n        val_acc, results = self.get_score(c)\n\n        acc = {list(self.objective_info.keys())[0]: float(val_acc)}\n        \n        # Add standard test accuracy\n        acc['test_standard_acc'] = float(results['test_standard_acc'])\n        \n        # Calculate average of other test accuracies\n        other_test_accs = [v for k, v in results.items() if k.startswith('test_') and k != 'test_standard_acc']\n        if other_test_accs:\n            acc['test_robust_acc'] = float(sum(other_test_accs) / len(other_test_accs))\n        \n        \n        return acc\n    \n    def get_objectives(self) -> Dict:\n        return {'function_value': 'minimize'}\n    \n    def get_problem_type(self):\n        return \"hpo\"\n\n\n@problem_registry.register(\"HPO_ERM\")\nclass HPO_ERM(HPO_base):    \n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n        ):            \n        algorithm = kwargs.pop('algorithm', 'ERM')\n        architecture = kwargs.pop('architecture', 'resnet')\n        model_size = kwargs.pop('model_size', 18)\n        optimizer = kwargs.pop('optimizer', 'random')\n        base_dir = kwargs.pop('base_dir', os.path.expanduser('~'))\n        \n        super(HPO_ERM, self).__init__(\n            task_name=task_name, \n            budget_type=budget_type, \n            budget=budget, \n            seed=seed, \n            workload=workload, \n            algorithm=algorithm, \n            architecture=architecture, \n            model_size=model_size,\n            optimizer=optimizer,\n            base_dir=base_dir,\n            **kwargs\n        )\n\ndef test_all_combinations():\n    print(\"Testing all combinations of architectures, algorithms, and datasets...\")\n    \n    for architecture in HPO_base.ARCHITECTURES:\n        for model_size in HPO_base.ARCHITECTURES[architecture]:\n            for algorithm in HPO_base.ALGORITHMS:\n                for dataset_index, dataset in enumerate(HPO_base.DATASETS):\n                    print(f\"Testing {architecture}-{model_size} with {algorithm} on {dataset}...\")\n                    try:\n                        # Create an instance of HPO_base\n                        hpo = HPO_base(task_name='test_combination', \n                                       budget_type='FEs', budget=100, seed=0, \n                                       workload=dataset_index, algorithm=algorithm, \n                                       architecture=architecture, model_size=model_size, optimizer='test_combination')\n                        \n                        # Get the configuration space\n                        config_space = hpo.get_configuration_space()\n                        \n                        
# Get the fidelity space\n                        fidelity_space = hpo.get_fidelity_space()\n                        \n                        # Sample a random configuration\n                        config = {}\n                        for name, var in config_space.get_design_variables().items():\n                            if isinstance(var, Integer):\n                                config[name] = np.random.randint(var.search_space_range[0], var.search_space_range[1] + 1)\n                            elif isinstance(var, Continuous) or isinstance(var, LogContinuous):\n                                config[name] = np.random.uniform(var.search_space_range[0], var.search_space_range[1])\n                            elif isinstance(var, Categorical):\n                                config[name] = np.random.choice(var.search_space_range)\n                                \n                        \n                        \n                        # Sample a random fidelity\n                        fidelity = {}\n                        for name, var in fidelity_space.get_fidelity_range().items():\n                            if isinstance(var, Integer):\n                                fidelity[name] = np.random.randint(var.search_space_range[0], var.search_space_range[1] + 1)\n                            elif isinstance(var, Continuous):\n                                fidelity[name] = np.random.uniform(var.search_space_range[0], var.search_space_range[1])\n                            elif isinstance(var, Categorical):\n                                fidelity[name] = np.random.choice(var.search_space_range)\n                        \n                        # Set a small epoch for quick testing\n                        fidelity['epoch'] = 2\n                        \n                        # Run the objective function\n                        result = hpo.objective_function(configuration=config, fidelity=fidelity)\n                        \n                        print(f\"Configuration: {config}\")\n                        print(f\"Fidelity: {fidelity}\")\n                        print(f\"Result: {result}\")\n                        \n                        assert list(hpo.get_objectives().keys())[0] in result, f\"Result should contain '{list(hpo.get_objectives().keys())[0]}'\"\n                        assert 0 <= result[list(hpo.get_objectives().keys())[0]] <= 1, f\"{list(hpo.get_objectives().keys())[0]} should be between 0 and 1\"\n                        \n                        print(f\"Test passed for {architecture}-{model_size} with {algorithm} on {dataset}!\")\n                        print(\"--------------------\")\n                    except Exception as e:\n                        print(f\"Error occurred during test for {architecture}-{model_size} with {algorithm} on {dataset}: {str(e)}\")\n                        import traceback\n                        traceback.print_exc()\n                        print(\"--------------------\")\n\nif __name__ == \"__main__\":\n    import torch\n    import numpy as np\n\n    # Set random seed for reproducibility\n    np.random.seed(0)\n    torch.manual_seed(0)\n    \n    # Run the comprehensive test\n    try:\n        test_all_combinations()\n    except Exception as e:\n        print(f\"Error occurred during HPO_ERM test: {str(e)}\")\n        import traceback\n        traceback.print_exc()\n\n\n\n"
  },
  {
    "path": "transopt/benchmark/HPO/HPOAdaBoost.py",
    "content": "import os\nimport time\nimport logging\nimport torch\nimport numpy as np\nimport xgboost as xgb\nfrom typing import Union, Tuple, Dict, List\nfrom sklearn import pipeline\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.metrics import accuracy_score, make_scorer\nfrom sklearn.preprocessing import OneHotEncoder\n\nfrom transopt.utils.openml_data_manager import OpenMLHoldoutDataManager\nfrom transopt.space.variable import *\nfrom transopt.agent.registry import problem_registry\nfrom transopt.benchmark.problem_base.non_tab_problem import NonTabularProblem\nfrom transopt.space.search_space import SearchSpace\nfrom transopt.space.fidelity_space import FidelitySpace\n\nfrom transopt.optimizer.sampler.random import RandomSampler\n\nos.environ['OMP_NUM_THREADS'] = \"1\"\nlogger = logging.getLogger('XGBBenchmark')\n\n\n@problem_registry.register('AdaBoost')\nclass XGBoostBenchmark(NonTabularProblem):\n    task_lists = [167149, 167152, 126029, 167178, 167177, 167153, 167154, 167155, 167156]\n    problem_type = 'hpo'\n    num_variables = 10\n    num_objectives = 1\n    workloads = []\n    fidelity = None\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        \"\"\"\n\n        Parameters\n        ----------\n        task_id : int, None\n        n_threads  : int, None\n        seed : np.random.RandomState, int, None\n        \"\"\"\n        super(XGBoostBenchmark, self).__init__(\n            task_name=task_name,\n            budget=budget,\n            budget_type=budget_type,\n            seed=seed,\n            workload=workload,\n        )\n        self.task_id = XGBoostBenchmark.task_lists[workload]\n        if torch.cuda.is_available():\n            self.device = torch.device('cuda')\n        else:\n            self.device = torch.device('cpu')\n        self.n_threads = 1\n        self.budget = budget\n        self.accuracy_scorer = make_scorer(accuracy_score)\n\n        self.x_train, self.y_train, self.x_valid, self.y_valid, self.x_test, self.y_test, variable_types = \\\n            self.get_data()\n        self.categorical_data = np.array([var_type == 'categorical' for var_type in variable_types])\n\n        # XGB needs sorted data. 
Categorical and numerical columns must not be mixed.\n        categorical_idx = np.argwhere(self.categorical_data)\n        continuous_idx = np.argwhere(~self.categorical_data)\n        sorting = np.concatenate([categorical_idx, continuous_idx]).squeeze()\n        self.categorical_data = self.categorical_data[sorting]\n        self.x_train = self.x_train[:, sorting]\n        self.x_valid = self.x_valid[:, sorting]\n        self.x_test = self.x_test[:, sorting]\n\n        nan_columns = np.all(np.isnan(self.x_train), axis=0)\n        self.categorical_data = self.categorical_data[~nan_columns]\n\n        self.x_train, self.x_valid, self.x_test, self.categories = \\\n            OpenMLHoldoutDataManager.replace_nans_in_cat_columns(self.x_train, self.x_valid, self.x_test,\n                                                                 is_categorical=self.categorical_data)\n\n        # Determine the number of categories in the labels.\n        # In case of binary classification ``self.num_class`` has to be 1 for xgboost.\n        self.num_class = len(np.unique(np.concatenate([self.y_train, self.y_test, self.y_valid])))\n        self.num_class = 1 if self.num_class == 2 else self.num_class\n\n        self.train_idx = np.random.choice(a=np.arange(len(self.x_train)),\n                                         size=len(self.x_train),\n                                         replace=False)\n\n        # Similar to [Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets]\n        # (https://arxiv.org/pdf/1605.07079.pdf),\n        # use 10 times the number of classes as the lower bound for the dataset fraction\n        n_classes = np.unique(self.y_train).shape[0]\n        self.lower_bound_train_size = (10 * n_classes) / self.x_train.shape[0]\n\n    def get_data(self) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray, List]:\n        \"\"\" Loads the data given a task or another source. \"\"\"\n\n        assert self.task_id is not None, NotImplementedError('No task-id given. Please either specify a task-id or '\n                                                             'overwrite the get_data Method.')\n\n        data_manager = OpenMLHoldoutDataManager(openml_task_id=self.task_id, rng=self.seed)\n        x_train, y_train, x_val, y_val, x_test, y_test = data_manager.load()\n\n        return x_train, y_train, x_val, y_val, x_test, y_test, data_manager.variable_types\n\n    def shuffle_data(self, seed=None):\n        \"\"\" Reshuffle the training data. If 'seed' is None, the training idx are shuffled according to the\n        class random state.\"\"\"\n        random_state = seed\n        random_state.shuffle(self.train_idx)\n\n    # pylint: disable=arguments-differ\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        \"\"\"\n        Trains an XGBoost model given a hyperparameter configuration and\n        evaluates the model on the validation set.\n\n        Parameters\n        ----------\n        configuration : Dict, CS.Configuration\n            Configuration for the XGBoost model\n        fidelity: Dict, None\n            Fidelity parameters for the XGBoost model, check get_fidelity_space(). Uses default (max) value if None.\n        shuffle : bool\n            If ``True``, shuffle the training idx. 
If no parameter ``rng`` is given, use the class random state.\n            Defaults to ``False``.\n        rng : np.random.RandomState, int, None,\n            Random seed for benchmark. By default the class level random seed.\n\n            To prevent overfitting on a single seed, it is possible to pass a\n            parameter ``rng`` as 'int' or 'np.random.RandomState' to this function.\n            If this parameter is not given, the default random state is used.\n        kwargs\n\n        Returns\n        -------\n        Dict -\n            function_value : validation loss\n            cost : time to train and evaluate the model\n            info : Dict\n                train_loss : trainings loss\n                fidelity : used fidelities in this evaluation\n        \"\"\"\n        self.seed = seed\n\n        # if shuffle:\n        #     self.shuffle_data(self.seed)\n\n        start = time.time()\n\n        model = self._get_pipeline(**configuration)\n        model.fit(X=self.x_train, y=self.y_train)\n\n        train_loss = 1 - self.accuracy_scorer(model, self.x_train, self.y_train)\n        val_loss = 1 - self.accuracy_scorer(model, self.x_valid, self.y_valid)\n        cost = time.time() - start\n\n        # return {'function_value': float(val_loss),\n        #         'cost': cost,\n        #         'info': {'train_loss': float(train_loss),\n        #                  'fidelity': fidelity}\n        #         }\n        results = {list(self.objective_info.keys())[0]: float(val_loss)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    # pylint: disable=arguments-differ\n    def objective_function_test(self, configuration: Union[Dict],\n                                fidelity: Union[Dict, None] = None,\n                                shuffle: bool = False,\n                                seed: Union[np.random.RandomState, int, None] = None, **kwargs) -> Dict:\n        \"\"\"\n        Trains a XGBoost model with a given configuration on both the train\n        and validation data set and evaluates the model on the test data set.\n\n        Parameters\n        ----------\n        configuration : Dict, CS.Configuration\n            Configuration for the XGBoost Model\n        fidelity: Dict, None\n            Fidelity parameters, check get_fidelity_space(). Uses default (max) value if None.\n        shuffle : bool\n            If ``True``, shuffle the training idx. If no parameter ``rng`` is given, use the class random state.\n            Defaults to ``False``.\n        rng : np.random.RandomState, int, None,\n            Random seed for benchmark. 
By default the class level random seed.\n            To prevent overfitting on a single seed, it is possible to pass a\n            parameter ``rng`` as 'int' or 'np.random.RandomState' to this function.\n            If this parameter is not given, the default random state is used.\n        kwargs\n\n        Returns\n        -------\n        Dict -\n            function_value : test loss\n            cost : time to train and evaluate the model\n            info : Dict\n                fidelity : used fidelities in this evaluation\n        \"\"\"\n        default_dataset_fraction = self.get_fidelity_space().get_hyperparameter('dataset_fraction').default_value\n        if fidelity['dataset_fraction'] != default_dataset_fraction:\n            raise NotImplementedError(f'Test error can not be computed for dataset_fraction <= '\n                                      f'{default_dataset_fraction}')\n\n        self.seed = seed\n\n        if shuffle:\n            self.shuffle_data(self.seed)\n\n        start = time.time()\n\n        # Impute potential nan values with the feature-\n        data = np.concatenate((self.x_train, self.x_valid))\n        targets = np.concatenate((self.y_train, self.y_valid))\n\n        model = self._get_pipeline(**configuration)\n        model.fit(X=data, y=targets)\n\n        test_loss = 1 - self.accuracy_scorer(model, self.x_test, self.y_test)\n        cost = time.time() - start\n\n        return {'function_value': float(test_loss),\n                'cost': cost,\n                'info': {'fidelity': fidelity}}\n\n    def get_configuration_space(self, seed: Union[int, None] = None):\n        \"\"\"\n        Creates a ConfigSpace.ConfigurationSpace containing all parameters for\n        the XGBoost Model\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the ConfigSpace.ConfigurationSpace\n\n        Returns\n        -------\n        ConfigSpace.ConfigurationSpace\n        \"\"\"\n\n        variables=[Continuous('eta', [-10.0, 0.0]),\n                   Integer('max_depth', [1, 15]),\n                   Continuous('min_child_weight', [0.0, 7.0]),\n                   Continuous('colsample_bytree', [0.01, 1.0]),\n                   Continuous('colsample_bylevel', [0.01, 1.0]),\n                   Continuous('reg_lambda', [-10.0, 10.0]),\n                   Continuous('reg_alpha', [-10.0, 10.0]),\n                   Continuous('subsample_per_it', [0.1, 1.0]),\n                   Integer('n_estimators', [1, 50]),\n                   Continuous('gamma', [0.0, 1.0])]\n        ss = SearchSpace(variables)\n        return ss\n\n\n    def get_fidelity_space(self, seed: Union[int, None] = None):\n        \"\"\"\n        Creates a ConfigSpace.ConfigurationSpace containing all fidelity parameters for\n        the XGBoost Benchmark\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the ConfigSpace.ConfigurationSpace\n\n        Returns\n        -------\n        ConfigSpace.ConfigurationSpace\n        \"\"\"\n        # seed = seed if seed is not None else np.random.randint(1, 100000)\n        # fidel_space = CS.ConfigurationSpace(seed=seed)\n\n        # fidel_space.add_hyperparameters([\n        #     CS.UniformFloatHyperparameter(\"dataset_fraction\", lower=0.0, upper=1.0, default_value=1.0, log=False),\n        #     CS.UniformIntegerHyperparameter(\"n_estimators\", lower=1, upper=256, default_value=256, log=False)\n        # ])\n\n        # return fidel_space\n        fs = 
FidelitySpace([])\n        return fs\n\n\n    def get_meta_information(self) -> Dict:\n        \"\"\" Returns the meta information for the benchmark \"\"\"\n        return {'name': 'XGBoost',\n                'references': ['@article{probst2019tunability,'\n                               'title={Tunability: Importance of hyperparameters of machine learning algorithms.},'\n                               'author={Probst, Philipp and Boulesteix, Anne-Laure and Bischl, Bernd},'\n                               'journal={J. Mach. Learn. Res.},'\n                               'volume={20},'\n                               'number={53},'\n                               'pages={1--32},'\n                               'year={2019}'\n                               '}'],\n                'code': 'https://github.com/automl/HPOlib1.5/blob/development/hpolib/benchmarks/ml/'\n                        'xgboost_benchmark_old.py',\n                'shape of train data': self.x_train.shape,\n                'shape of test data': self.x_test.shape,\n                'shape of valid data': self.x_valid.shape,\n                'initial random seed': self.seed,\n                'task_id': self.task_id\n                }\n\n    def _get_pipeline(self, max_depth: int, eta: float, min_child_weight: int,\n                      colsample_bytree: float, colsample_bylevel: float, reg_lambda: int, reg_alpha: int,\n                      n_estimators: int, subsample_per_it: float, gamma: float) \\\n            -> pipeline.Pipeline:\n        \"\"\" Create the scikit-learn (training-)pipeline \"\"\"\n        objective = 'binary:logistic' if self.num_class <= 2 else 'multi:softmax'\n\n        if torch.cuda.is_available():\n            clf = pipeline.Pipeline([\n                ('preprocess_impute',\n                 ColumnTransformer([\n                     (\"categorical\", \"passthrough\", self.categorical_data),\n                     (\"continuous\", SimpleImputer(strategy=\"mean\"), ~self.categorical_data)])),\n                ('preprocess_one_hot',\n                 ColumnTransformer([\n                     (\"categorical\", OneHotEncoder(categories=self.categories, sparse=False), self.categorical_data),\n                     (\"continuous\", \"passthrough\", ~self.categorical_data)])),\n                ('xgb',\n                 xgb.XGBClassifier(\n                     max_depth=max_depth,\n                     learning_rate=np.exp2(eta),\n                     min_child_weight=np.exp2(min_child_weight),\n                     colsample_bytree=colsample_bytree,\n                     colsample_bylevel=colsample_bylevel,\n                     reg_alpha=np.exp2(reg_alpha),\n                     reg_lambda=np.exp2(reg_lambda),\n                     n_estimators=n_estimators,\n                     objective=objective,\n                     n_jobs=self.n_threads,\n                     random_state=self.seed,\n                     num_class=self.num_class,\n                     subsample=subsample_per_it,\n                     gamma=gamma,\n                     tree_method='gpu_hist',\n                     gpu_id=0\n                 ))\n            ])\n        else:\n            clf = pipeline.Pipeline([\n                ('preprocess_impute',\n                 ColumnTransformer([\n                     (\"categorical\", \"passthrough\", self.categorical_data),\n                     (\"continuous\", SimpleImputer(strategy=\"mean\"), ~self.categorical_data)])),\n                ('preprocess_one_hot',\n                 
ColumnTransformer([\n                     (\"categorical\", OneHotEncoder(categories=self.categories), self.categorical_data),\n                     (\"continuous\", \"passthrough\", ~self.categorical_data)])),\n                ('xgb',\n                 xgb.XGBClassifier(\n                     max_depth=max_depth,\n                     learning_rate=np.exp2(eta),\n                     min_child_weight=np.exp2(min_child_weight),\n                     colsample_bytree=colsample_bytree,\n                     colsample_bylevel=colsample_bylevel,\n                     reg_alpha=np.exp2(reg_alpha),\n                     reg_lambda=np.exp2(reg_lambda),\n                     n_estimators=n_estimators,\n                     objective=objective,\n                     n_jobs=self.n_threads,\n                     random_state=self.seed,\n                     num_class=self.num_class,\n                     subsample=subsample_per_it,\n                     gamma = gamma,\n                     ))\n                ])\n\n        return clf\n\n    def get_objectives(self) -> Dict:\n        return {'train_loss': 'minimize'}\n    \n    def get_problem_type(self):\n        return \"hpo\"\n\n    # def get_var_range(self):\n    #     return {'eta':[-10,0], 'max_depth':[1, 15], 'min_child_weight':[0, 7], 'colsample_bytree':[0.01, 1.0], 'colsample_bylevel':[0.01, 1.0],\n    #             'reg_lambda':[-10, 10], 'reg_alpha':[-10, 10], 'subsample_per_it':[0.1, 1.0], 'n_estimators':[1, 50], 'gamma':[0,1.0]}\n    #\n    #\n    # def get_var_type(self):\n    #     return {'eta':'exp2', 'max_depth':'int', 'min_child_weight':'exp2', 'colsample_bytree':'float','colsample_bylevel':'float',\n    #             'reg_lambda':'exp2', 'reg_alpha':'exp2', 'subsample_per_it':'float', 'n_estimators':'int', 'gamma':'float'}\n\n\n\n\nif __name__ == '__main__':\n    task_lists = [167149, 167152, 126029, 167178, 167177, 167153, 167154, 167155, 167156]\n    workload = 8\n    problem = XGBoostBenchmark(task_name='XGB', budget=20, budget_type = 'fes', workload=workload, seed = 0)\n    sampler = RandomSampler(3000, config=None)\n    space = problem.configuration_space\n    samples = sampler.sample(space,3000)\n    \n    parameters = [space.map_to_design_space(sample) for sample in samples]\n    import tqdm\n    for para_id in tqdm.tqdm(range(len(parameters))):\n        parameters[para_id]['score'] = problem.f(parameters[para_id])['train_loss']\n    import pandas as pd\n    \n\n    \n    df  = pd.DataFrame(parameters)\n    df.to_csv(f'XGB_{workload}.csv')\n\n    # a = problem.f({'eta':-0.2, 'max_depth':5, 'min_child_weight':2, 'colsample_bytree':0.4, 'colsample_bylevel':0.4,\n    #             'reg_lambda':0.5, 'reg_alpha':-0.2, 'subsample_per_it':0.7, 'n_estimators':20, 'gamma':0.9})\n    \n\n\n"
  },
  {
    "path": "transopt/benchmark/HPO/HPOSVM.py",
    "content": "import logging\nimport time\nimport numpy as np\nfrom scipy import sparse\nfrom typing import Union, Tuple, Dict, List\nfrom sklearn import pipeline\nfrom sklearn import svm\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.metrics import accuracy_score, make_scorer\nfrom sklearn.preprocessing import OneHotEncoder, MinMaxScaler\n\nfrom transopt.utils.openml_data_manager import OpenMLHoldoutDataManager\nfrom transopt.space.variable import *\nfrom transopt.agent.registry import problem_registry\nfrom transopt.benchmark.problem_base.non_tab_problem import NonTabularProblem\nfrom transopt.space.search_space import SearchSpace\nfrom transopt.space.fidelity_space import FidelitySpace\n\nlogger = logging.getLogger('SVMBenchmark')\n\n\n@problem_registry.register('SVM')\nclass SupportVectorMachine(NonTabularProblem):\n    \"\"\"\n    Hyperparameter optimization task to optimize the regularization\n    parameter C and the kernel parameter gamma of a support vector machine.\n    Both hyperparameters are optimized on a log scale in [-10, 10].\n    The X_test data set is only used for a final offline evaluation of\n    a configuration. For that the validation and training data is\n    concatenated to form the whole training data set.\n    \"\"\"\n    task_lists = [167149, 167152, 167183, 126025, 126029, 167161, 167169,\n                  167178, 167176, 167177]\n    problem_type = 'hpo'\n    num_variables = 2\n    num_objectives = 1\n    workloads = []\n    fidelity = None\n\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        \"\"\"\n        Parameters\n        ----------\n        task_id : int, None\n        rng : np.random.RandomState, int, None\n        \"\"\"\n        super(SupportVectorMachine, self).__init__(\n            task_name=task_name,\n            budget=budget,\n            budget_type=budget_type,\n            seed=seed,\n            workload=workload,\n        )\n        task_type='non-tabular'\n        self.task_id = SupportVectorMachine.task_lists[workload]\n        self.cache_size = 200  # Cache for the SVC in MB\n        self.accuracy_scorer = make_scorer(accuracy_score)\n\n        self.x_train, self.y_train, self.x_valid, self.y_valid, self.x_test, self.y_test, variable_types = \\\n            self.get_data()\n        self.categorical_data = np.array([var_type == 'categorical' for var_type in variable_types])\n\n        # Sort data (Categorical + numerical) so that categorical and continous are not mixed.\n        categorical_idx = np.argwhere(self.categorical_data)\n        continuous_idx = np.argwhere(~self.categorical_data)\n        sorting = np.concatenate([categorical_idx, continuous_idx]).squeeze()\n        self.categorical_data = self.categorical_data[sorting]\n        self.x_train = self.x_train[:, sorting]\n        self.x_valid = self.x_valid[:, sorting]\n        self.x_test = self.x_test[:, sorting]\n\n        nan_columns = np.all(np.isnan(self.x_train), axis=0)\n        self.categorical_data = self.categorical_data[~nan_columns]\n        self.x_train, self.x_valid, self.x_test, self.categories = \\\n            OpenMLHoldoutDataManager.replace_nans_in_cat_columns(self.x_train, self.x_valid, self.x_test,\n                                                                 is_categorical=self.categorical_data)\n\n        self.train_idx = np.random.choice(a=np.arange(len(self.x_train)),\n                                         size=len(self.x_train),\n 
                                        replace=False)\n\n        # Similar to [Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets]\n        # (https://arxiv.org/pdf/1605.07079.pdf),\n        # use 10 time the number of classes as lower bound for the dataset fraction\n        n_classes = np.unique(self.y_train).shape[0]\n        self.lower_bound_train_size = (10 * n_classes) / self.x_train.shape[0]\n\n    def get_data(self) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray, List]:\n        \"\"\" Loads the data given a task or another source. \"\"\"\n\n        assert self.task_id is not None, NotImplementedError('No task-id given. Please either specify a task-id or '\n                                                             'overwrite the get_data Method.')\n\n        data_manager = OpenMLHoldoutDataManager(openml_task_id=self.task_id, rng=self.seed)\n        x_train, y_train, x_val, y_val, x_test, y_test = data_manager.load()\n\n        return x_train, y_train, x_val, y_val, x_test, y_test, data_manager.variable_types\n\n    def shuffle_data(self, seed=None):\n        \"\"\" Reshuffle the training data. If 'rng' is None, the training idx are shuffled according to the\n        class-random-state\"\"\"\n        random_state = seed\n        random_state.shuffle(self.train_idx)\n\n    # pylint: disable=arguments-differ\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        \"\"\"\n        Trains a SVM model given a hyperparameter configuration and\n        evaluates the model on the validation set.\n\n        Parameters\n        ----------\n        configuration : Dict, CS.Configuration\n            Configuration for the SVM model\n        fidelity: Dict, None\n            Fidelity parameters for the SVM model, check get_fidelity_space(). Uses default (max) value if None.\n        shuffle : bool\n            If ``True``, shuffle the training idx. If no parameter ``rng`` is given, use the class random state.\n            Defaults to ``False``.\n        rng : np.random.RandomState, int, None,\n            Random seed for benchmark. 
By default the class level random seed.\n\n            To prevent overfitting on a single seed, it is possible to pass a\n            parameter ``rng`` as 'int' or 'np.random.RandomState' to this function.\n            If this parameter is not given, the default random state is used.\n        kwargs\n\n        Returns\n        -------\n        Dict -\n            function_value : validation loss\n            cost : time to train and evaluate the model\n            info : Dict\n                train_loss : training loss\n                fidelity : used fidelities in this evaluation\n        \"\"\"\n        start_time = time.time()\n\n        self.seed = seed\n\n        # if shuffle:\n        #     self.shuffle_data(self.seed)\n\n        # Transform hyperparameters to linear scale\n        hp_c = np.exp(float(configuration['C']))\n        hp_gamma = np.exp(float(configuration['gamma']))\n\n        # Train support vector machine\n        model = self.get_pipeline(hp_c, hp_gamma)\n        model.fit(self.x_train, self.y_train)\n\n        # Compute validation error\n        train_loss = 1 - self.accuracy_scorer(model, self.x_train, self.y_train)\n        val_loss = 1 - self.accuracy_scorer(model, self.x_valid, self.y_valid)\n\n        cost = time.time() - start_time\n\n        # return {'function_value': float(val_loss),\n        #         \"cost\": cost,\n        #         'info': {'train_loss': float(train_loss),\n        #                  'fidelity': fidelity}}\n\n        results = {list(self.objective_info.keys())[0]: float(val_loss)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    # pylint: disable=arguments-differ\n    def objective_function_test(self, configuration: Union[Dict],\n                                fidelity: Union[Dict, None] = None,\n                                shuffle: bool = False,\n                                seed: Union[np.random.RandomState, int, None] = None, **kwargs) -> Dict:\n        \"\"\"\n        Trains a SVM model with a given configuration on both the X_train\n        and validation data set and evaluates the model on the X_test data set.\n\n        Parameters\n        ----------\n        configuration : Dict, CS.Configuration\n            Configuration for the SVM Model\n        fidelity: Dict, None\n            Fidelity parameters, check get_fidelity_space(). Uses default (max) value if None.\n        shuffle : bool\n            If ``True``, shuffle the training idx. If no parameter ``rng`` is given, use the class random state.\n            Defaults to ``False``.\n        rng : np.random.RandomState, int, None,\n            Random seed for benchmark. 
By default the class level random seed.\n            To prevent overfitting on a single seed, it is possible to pass a\n            parameter ``rng`` as 'int' or 'np.random.RandomState' to this function.\n            If this parameter is not given, the default random state is used.\n        kwargs\n\n        Returns\n        -------\n        Dict -\n            function_value : X_test loss\n            cost : time to X_train and evaluate the model\n            info : Dict\n                train_valid_loss: Loss on the train+valid data set\n                fidelity : used fidelities in this evaluation\n        \"\"\"\n\n\n        self.seed = seed\n\n        if shuffle:\n            self.shuffle_data(self.seed)\n\n        start_time = time.time()\n\n        # Concatenate training and validation dataset\n        if isinstance(self.x_train, sparse.csr.csr_matrix) or isinstance(self.x_valid, sparse.csr.csr_matrix):\n            data = sparse.vstack((self.x_train, self.x_valid))\n        else:\n            data = np.concatenate((self.x_train, self.x_valid))\n        targets = np.concatenate((self.y_train, self.y_valid))\n\n        # Transform hyperparameters to linear scale\n        hp_c = np.exp(float(configuration['C']))\n        hp_gamma = np.exp(float(configuration['gamma']))\n\n        model = self.get_pipeline(hp_c, hp_gamma)\n        model.fit(data, targets)\n\n        # Compute validation error\n        train_valid_loss = 1 - self.accuracy_scorer(model, data, targets)\n\n        # Compute test error\n        test_loss = 1 - self.accuracy_scorer(model, self.x_test, self.y_test)\n\n        cost = time.time() - start_time\n\n        # return {'function_value': float(test_loss),\n        #         \"cost\": cost,\n        #         'info': {'train_valid_loss': float(train_valid_loss),\n        #                  'fidelity': fidelity}}\n\n        results = {list(self.objective_info.keys())[0]: float(test_loss)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n\n        return results\n\n    def get_pipeline(self, C: float, gamma: float) -> pipeline.Pipeline:\n        \"\"\" Create the scikit-learn (training-)pipeline \"\"\"\n\n        model = pipeline.Pipeline([\n            ('preprocess_impute',\n             ColumnTransformer([\n                 (\"categorical\", \"passthrough\", self.categorical_data),\n                 (\"continuous\", SimpleImputer(strategy=\"mean\"), ~self.categorical_data)])),\n            ('preprocess_one_hot',\n             ColumnTransformer([\n                 (\"categorical\", OneHotEncoder(categories=self.categories), self.categorical_data),\n                 (\"continuous\", MinMaxScaler(feature_range=(0, 1)), ~self.categorical_data)])),\n            ('svm',\n             svm.SVC(gamma=gamma, C=C, random_state=self.seed, cache_size=self.cache_size))\n        ])\n        return model\n\n    def get_configuration_space(self, seed: Union[int, None] = None):\n        \"\"\"\n        Creates a ConfigSpace.ConfigurationSpace containing all parameters for\n        the SVM Model\n\n        For a detailed explanation of the hyperparameters:\n        https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the ConfigSpace.ConfigurationSpace\n\n        Returns\n        -------\n        ConfigSpace.ConfigurationSpace\n        \"\"\"\n\n        variables=[Continuous('C', [-10, 10]), 
Continuous('gamma', [-10, 10])]\n        ss = SearchSpace(variables)\n        return ss\n\n\n    def get_fidelity_space(self, seed: Union[int, None] = None):\n        \"\"\"\n        Creates a ConfigSpace.ConfigurationSpace containing all fidelity parameters for\n        the SupportVector Benchmark\n\n        Fidelities\n        ----------\n        dataset_fraction: float - [0.1, 1]\n            fraction of training data set to use\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the ConfigSpace.ConfigurationSpace\n\n        Returns\n        -------\n        ConfigSpace.ConfigurationSpace\n        \"\"\"\n        # seed = seed if seed is not None else np.random.randint(1, 100000)\n        # fidel_space = CS.ConfigurationSpace(seed=seed)\n\n        # fidel_space.add_hyperparameters([\n        #     CS.UniformFloatHyperparameter(\"dataset_fraction\", lower=0.0, upper=1.0, default_value=1.0, log=False),\n        # ])\n\n        fs = FidelitySpace([])\n        return fs\n\n\n    def get_meta_information(self):\n        \"\"\" Returns the meta information for the benchmark \"\"\"\n        return {'name': 'Support Vector Machine',\n                'references': [\"@InProceedings{pmlr-v54-klein17a\",\n                               \"author = {Aaron Klein and Stefan Falkner and Simon Bartels and Philipp Hennig and \"\n                               \"Frank Hutter}, \"\n                               \"title = {{Fast Bayesian Optimization of Machine Learning Hyperparameters on \"\n                               \"Large Datasets}}\"\n                               \"pages = {528--536}, year = {2017},\"\n                               \"editor = {Aarti Singh and Jerry Zhu},\"\n                               \"volume = {54},\"\n                               \"series = {Proceedings of Machine Learning Research},\"\n                               \"address = {Fort Lauderdale, FL, USA},\"\n                               \"month = {20--22 Apr},\"\n                               \"publisher = {PMLR},\"\n                               \"pdf = {http://proceedings.mlr.press/v54/klein17a/klein17a.pdf}, \"\n                               \"url = {http://proceedings.mlr.press/v54/klein17a.html}, \"\n                               ],\n                'code': 'https://github.com/automl/HPOlib1.5/blob/container/hpolib/benchmarks/ml/svm_benchmark.py',\n                'shape of train data': self.x_train.shape,\n                'shape of test data': self.x_test.shape,\n                'shape of valid data': self.x_valid.shape,\n                'initial random seed': self.seed,\n                'task_id': self.task_id\n                }\n    \n    def get_objectives(self) -> Dict:\n        return {'train_loss': 'minimize'}\n    \n    def get_problem_type(self):\n        return \"hpo\"\n\nif __name__ == '__main__':\n    task_lists = [167149, 167152, 126029, 167178, 167177]\n    problem = SupportVectorMachine(task_name='svm',task_id=167149, seed=0, budget=10)\n    a = problem.f({'C':0.2, 'gamma':-0.3})\n    print(a)\n"
  },
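  {
    "path": "examples/svm_hpo_sketch.py",
    "content": "\"\"\"Hypothetical usage sketch, not a file from the original repository: it shows how the\nSupportVectorMachine benchmark above appears to be driven, based only on the signatures in\nHPOSVM.py (mirroring its __main__ block). The budget_type/budget values and the workload\nindex are placeholder assumptions; workload indexes into SupportVectorMachine.task_lists.\n\"\"\"\nfrom transopt.benchmark.HPO.HPOSVM import SupportVectorMachine\n\n# workload=0 selects OpenML task 167149 from task_lists.\nproblem = SupportVectorMachine(task_name='svm', budget_type='fes', budget=10,\n                               seed=0, workload=0)\n\n# 'C' and 'gamma' are searched on a log scale in [-10, 10]; objective_function\n# applies np.exp, so the SVC actually sees C = e^0.2 and gamma = e^-0.3.\nresult = problem.f({'C': 0.2, 'gamma': -0.3})\nprint(result)\n"
  },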
  {
    "path": "transopt/benchmark/HPO/HPOXGBoost.py",
    "content": "import os\nimport time\nimport logging\nimport torch\nimport numpy as np\nimport xgboost as xgb\nfrom typing import Union, Tuple, Dict, List\nfrom sklearn import pipeline\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.metrics import accuracy_score, make_scorer\nfrom sklearn.preprocessing import OneHotEncoder\n\nfrom transopt.utils.openml_data_manager import OpenMLHoldoutDataManager\nfrom transopt.space.variable import *\nfrom transopt.agent.registry import problem_registry\nfrom transopt.benchmark.problem_base.non_tab_problem import NonTabularProblem\nfrom transopt.space.search_space import SearchSpace\nfrom transopt.space.fidelity_space import FidelitySpace\n\nfrom transopt.optimizer.sampler.random import RandomSampler\n\nos.environ['OMP_NUM_THREADS'] = \"1\"\nlogger = logging.getLogger('XGBBenchmark')\n\n\n@problem_registry.register('XGB')\nclass XGBoostBenchmark(NonTabularProblem):\n    task_lists = [167149, 167152, 126029, 167178, 167177, 167153, 167154, 167155, 167156]\n    problem_type = 'hpo'\n    num_variables = 10\n    num_objectives = 1\n    workloads = []\n    fidelity = None\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        \"\"\"\n\n        Parameters\n        ----------\n        task_id : int, None\n        n_threads  : int, None\n        seed : np.random.RandomState, int, None\n        \"\"\"\n        super(XGBoostBenchmark, self).__init__(\n            task_name=task_name,\n            budget=budget,\n            budget_type=budget_type,\n            seed=seed,\n            workload=workload,\n        )\n        self.task_id = XGBoostBenchmark.task_lists[workload]\n        if torch.cuda.is_available():\n            self.device = torch.device('cuda')\n        else:\n            self.device = torch.device('cpu')\n        self.n_threads = 1\n        self.budget = budget\n        self.accuracy_scorer = make_scorer(accuracy_score)\n\n        self.x_train, self.y_train, self.x_valid, self.y_valid, self.x_test, self.y_test, variable_types = \\\n            self.get_data()\n        self.categorical_data = np.array([var_type == 'categorical' for var_type in variable_types])\n\n        # XGB needs sorted data. 
Data should be (Categorical + numerical), not mixed.\n        categorical_idx = np.argwhere(self.categorical_data)\n        continuous_idx = np.argwhere(~self.categorical_data)\n        sorting = np.concatenate([categorical_idx, continuous_idx]).squeeze()\n        self.categorical_data = self.categorical_data[sorting]\n        self.x_train = self.x_train[:, sorting]\n        self.x_valid = self.x_valid[:, sorting]\n        self.x_test = self.x_test[:, sorting]\n\n        nan_columns = np.all(np.isnan(self.x_train), axis=0)\n        self.categorical_data = self.categorical_data[~nan_columns]\n\n        self.x_train, self.x_valid, self.x_test, self.categories = \\\n            OpenMLHoldoutDataManager.replace_nans_in_cat_columns(self.x_train, self.x_valid, self.x_test,\n                                                                 is_categorical=self.categorical_data)\n\n        # Determine the number of categories in the labels.\n        # In case of binary classification ``self.num_class`` has to be 1 for xgboost.\n        self.num_class = len(np.unique(np.concatenate([self.y_train, self.y_test, self.y_valid])))\n        self.num_class = 1 if self.num_class == 2 else self.num_class\n\n        self.train_idx = np.random.choice(a=np.arange(len(self.x_train)),\n                                         size=len(self.x_train),\n                                         replace=False)\n\n        # Similar to [Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets]\n        # (https://arxiv.org/pdf/1605.07079.pdf),\n        # use 10 times the number of classes as lower bound for the dataset fraction\n        n_classes = np.unique(self.y_train).shape[0]\n        self.lower_bound_train_size = (10 * n_classes) / self.x_train.shape[0]\n\n    def get_data(self) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray, List]:\n        \"\"\" Loads the data given a task or another source. \"\"\"\n\n        assert self.task_id is not None, NotImplementedError('No task-id given. Please either specify a task-id or '\n                                                             'overwrite the get_data Method.')\n\n        data_manager = OpenMLHoldoutDataManager(openml_task_id=self.task_id, rng=self.seed)\n        x_train, y_train, x_val, y_val, x_test, y_test = data_manager.load()\n\n        return x_train, y_train, x_val, y_val, x_test, y_test, data_manager.variable_types\n\n    def shuffle_data(self, seed=None):\n        \"\"\" Reshuffle the training data. If ``seed`` is None, the training idx are shuffled with the\n        class level random seed. \"\"\"\n        random_state = seed if isinstance(seed, np.random.RandomState) \\\n            else np.random.RandomState(seed if seed is not None else self.seed)\n        random_state.shuffle(self.train_idx)\n\n    # pylint: disable=arguments-differ\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        \"\"\"\n        Trains a XGBoost model given a hyperparameter configuration and\n        evaluates the model on the validation set.\n\n        Parameters\n        ----------\n        configuration : Dict\n            Configuration for the XGBoost model\n        fidelity: Dict, None\n            Fidelity parameters for the XGBoost model, check get_fidelity_space(). Uses default (max) value if None.\n        seed : np.random.RandomState, int, None\n            Random seed for the benchmark. By default the class level random seed.\n\n            To prevent overfitting on a single seed, it is possible to pass a\n            parameter ``seed`` as 'int' or 'np.random.RandomState' to this function.\n            If this parameter is not given, the default random state is used.\n        kwargs\n\n        Returns\n        -------\n        Dict -\n            function_value : validation loss\n            cost : time to train and evaluate the model\n            info : Dict\n                train_loss : training loss\n                fidelity : used fidelities in this evaluation\n        \"\"\"\n        self.seed = seed\n\n        # if shuffle:\n        #     self.shuffle_data(self.seed)\n\n        start = time.time()\n\n        model = self._get_pipeline(**configuration)\n        model.fit(X=self.x_train, y=self.y_train)\n\n        train_loss = 1 - self.accuracy_scorer(model, self.x_train, self.y_train)\n        val_loss = 1 - self.accuracy_scorer(model, self.x_valid, self.y_valid)\n        cost = time.time() - start\n\n        # return {'function_value': float(val_loss),\n        #         'cost': cost,\n        #         'info': {'train_loss': float(train_loss),\n        #                  'fidelity': fidelity}\n        #         }\n        # NOTE: the objective key comes from get_objectives(); the reported value is the validation loss.\n        results = {list(self.objective_info.keys())[0]: float(val_loss)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name]\n        return results\n\n    # pylint: disable=arguments-differ\n    def objective_function_test(self, configuration: Union[Dict],\n                                fidelity: Union[Dict, None] = None,\n                                shuffle: bool = False,\n                                seed: Union[np.random.RandomState, int, None] = None, **kwargs) -> Dict:\n        \"\"\"\n        Trains a XGBoost model with a given configuration on both the train\n        and validation data set and evaluates the model on the test data set.\n\n        Parameters\n        ----------\n        configuration : Dict\n            Configuration for the XGBoost Model\n        fidelity: Dict, None\n            Fidelity parameters, check get_fidelity_space(). Uses default (max) value if None.\n        shuffle : bool\n            If ``True``, shuffle the training idx. If no parameter ``seed`` is given, use the class random state.\n            Defaults to ``False``.\n        seed : np.random.RandomState, int, None\n            Random seed for the benchmark. By default the class level random seed.\n            To prevent overfitting on a single seed, it is possible to pass a\n            parameter ``seed`` as 'int' or 'np.random.RandomState' to this function.\n            If this parameter is not given, the default random state is used.\n        kwargs\n\n        Returns\n        -------\n        Dict -\n            function_value : test loss\n            cost : time to train and evaluate the model\n            info : Dict\n                fidelity : used fidelities in this evaluation\n        \"\"\"\n        # The fidelity space is currently empty (see get_fidelity_space), so only guard\n        # against an explicitly reduced dataset_fraction being passed in.\n        if fidelity is not None and fidelity.get('dataset_fraction', 1.0) != 1.0:\n            raise NotImplementedError('Test error can not be computed for dataset_fraction < 1.0')\n\n        self.seed = seed\n\n        if shuffle:\n            self.shuffle_data(self.seed)\n\n        start = time.time()\n\n        # Concatenate training and validation data; potential nan values are imputed with\n        # the feature-wise mean by the pipeline's SimpleImputer.\n        data = np.concatenate((self.x_train, self.x_valid))\n        targets = np.concatenate((self.y_train, self.y_valid))\n\n        model = self._get_pipeline(**configuration)\n        model.fit(X=data, y=targets)\n\n        test_loss = 1 - self.accuracy_scorer(model, self.x_test, self.y_test)\n        cost = time.time() - start\n\n        return {'function_value': float(test_loss),\n                'cost': cost,\n                'info': {'fidelity': fidelity}}\n\n    def get_configuration_space(self, seed: Union[int, None] = None):\n        \"\"\"\n        Creates a SearchSpace containing all parameters for\n        the XGBoost Model\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the search space\n\n        Returns\n        -------\n        SearchSpace\n        \"\"\"\n\n        variables=[Continuous('eta', [-10.0, 0.0]),\n                   Integer('max_depth', [1, 15]),\n                   Continuous('min_child_weight', [0.0, 7.0]),\n                   Continuous('colsample_bytree', [0.01, 1.0]),\n                   Continuous('colsample_bylevel', [0.01, 1.0]),\n                   Continuous('reg_lambda', [-10.0, 10.0]),\n                   Continuous('reg_alpha', [-10.0, 10.0]),\n                   Continuous('subsample_per_it', [0.1, 1.0]),\n                   Integer('n_estimators', [1, 50]),\n                   Continuous('gamma', [0.0, 1.0])]\n        ss = SearchSpace(variables)\n        return ss\n\n\n    def get_fidelity_space(self, seed: Union[int, None] = None):\n        \"\"\"\n        Creates a FidelitySpace containing all fidelity parameters for\n        the XGBoost Benchmark\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the fidelity space\n\n        Returns\n        -------\n        FidelitySpace\n        \"\"\"\n        # seed = seed if seed is not None else np.random.randint(1, 100000)\n        # fidel_space = CS.ConfigurationSpace(seed=seed)\n\n        # fidel_space.add_hyperparameters([\n        #     CS.UniformFloatHyperparameter(\"dataset_fraction\", lower=0.0, upper=1.0, default_value=1.0, log=False),\n        #     CS.UniformIntegerHyperparameter(\"n_estimators\", lower=1, upper=256, default_value=256, log=False)\n        # ])\n\n        # return fidel_space\n        # The commented ConfigSpace fidelities above are currently disabled; an empty space is returned.\n        fs = 
FidelitySpace([])\n        return fs\n\n\n    def get_meta_information(self) -> Dict:\n        \"\"\" Returns the meta information for the benchmark \"\"\"\n        return {'name': 'XGBoost',\n                'references': ['@article{probst2019tunability,'\n                               'title={Tunability: Importance of hyperparameters of machine learning algorithms.},'\n                               'author={Probst, Philipp and Boulesteix, Anne-Laure and Bischl, Bernd},'\n                               'journal={J. Mach. Learn. Res.},'\n                               'volume={20},'\n                               'number={53},'\n                               'pages={1--32},'\n                               'year={2019}'\n                               '}'],\n                'code': 'https://github.com/automl/HPOlib1.5/blob/development/hpolib/benchmarks/ml/'\n                        'xgboost_benchmark_old.py',\n                'shape of train data': self.x_train.shape,\n                'shape of test data': self.x_test.shape,\n                'shape of valid data': self.x_valid.shape,\n                'initial random seed': self.seed,\n                'task_id': self.task_id\n                }\n\n    def _get_pipeline(self, max_depth: int, eta: float, min_child_weight: int,\n                      colsample_bytree: float, colsample_bylevel: float, reg_lambda: int, reg_alpha: int,\n                      n_estimators: int, subsample_per_it: float, gamma: float) \\\n            -> pipeline.Pipeline:\n        \"\"\" Create the scikit-learn (training-)pipeline \"\"\"\n        objective = 'binary:logistic' if self.num_class <= 2 else 'multi:softmax'\n\n        if torch.cuda.is_available():\n            clf = pipeline.Pipeline([\n                ('preprocess_impute',\n                 ColumnTransformer([\n                     (\"categorical\", \"passthrough\", self.categorical_data),\n                     (\"continuous\", SimpleImputer(strategy=\"mean\"), ~self.categorical_data)])),\n                ('preprocess_one_hot',\n                 ColumnTransformer([\n                     (\"categorical\", OneHotEncoder(categories=self.categories, sparse=False), self.categorical_data),\n                     (\"continuous\", \"passthrough\", ~self.categorical_data)])),\n                ('xgb',\n                 xgb.XGBClassifier(\n                     max_depth=max_depth,\n                     learning_rate=np.exp2(eta),\n                     min_child_weight=np.exp2(min_child_weight),\n                     colsample_bytree=colsample_bytree,\n                     colsample_bylevel=colsample_bylevel,\n                     reg_alpha=np.exp2(reg_alpha),\n                     reg_lambda=np.exp2(reg_lambda),\n                     n_estimators=n_estimators,\n                     objective=objective,\n                     n_jobs=self.n_threads,\n                     random_state=self.seed,\n                     num_class=self.num_class,\n                     subsample=subsample_per_it,\n                     gamma=gamma,\n                     tree_method='gpu_hist',\n                     gpu_id=0\n                 ))\n            ])\n        else:\n            clf = pipeline.Pipeline([\n                ('preprocess_impute',\n                 ColumnTransformer([\n                     (\"categorical\", \"passthrough\", self.categorical_data),\n                     (\"continuous\", SimpleImputer(strategy=\"mean\"), ~self.categorical_data)])),\n                ('preprocess_one_hot',\n                 
ColumnTransformer([\n                     (\"categorical\", OneHotEncoder(categories=self.categories), self.categorical_data),\n                     (\"continuous\", \"passthrough\", ~self.categorical_data)])),\n                ('xgb',\n                 xgb.XGBClassifier(\n                     max_depth=max_depth,\n                     learning_rate=np.exp2(eta),\n                     min_child_weight=np.exp2(min_child_weight),\n                     colsample_bytree=colsample_bytree,\n                     colsample_bylevel=colsample_bylevel,\n                     reg_alpha=np.exp2(reg_alpha),\n                     reg_lambda=np.exp2(reg_lambda),\n                     n_estimators=n_estimators,\n                     objective=objective,\n                     n_jobs=self.n_threads,\n                     random_state=self.seed,\n                     num_class=self.num_class,\n                     subsample=subsample_per_it,\n                     gamma = gamma,\n                     ))\n                ])\n\n        return clf\n\n    def get_objectives(self) -> Dict:\n        return {'train_loss': 'minimize'}\n    \n    def get_problem_type(self):\n        return \"hpo\"\n\n    # def get_var_range(self):\n    #     return {'eta':[-10,0], 'max_depth':[1, 15], 'min_child_weight':[0, 7], 'colsample_bytree':[0.01, 1.0], 'colsample_bylevel':[0.01, 1.0],\n    #             'reg_lambda':[-10, 10], 'reg_alpha':[-10, 10], 'subsample_per_it':[0.1, 1.0], 'n_estimators':[1, 50], 'gamma':[0,1.0]}\n    #\n    #\n    # def get_var_type(self):\n    #     return {'eta':'exp2', 'max_depth':'int', 'min_child_weight':'exp2', 'colsample_bytree':'float','colsample_bylevel':'float',\n    #             'reg_lambda':'exp2', 'reg_alpha':'exp2', 'subsample_per_it':'float', 'n_estimators':'int', 'gamma':'float'}\n\n\n\n\nif __name__ == '__main__':\n    task_lists = [167149, 167152, 126029, 167178, 167177, 167153, 167154, 167155, 167156]\n    workload = 8\n    problem = XGBoostBenchmark(task_name='XGB', budget=20, budget_type = 'fes', workload=workload, seed = 0)\n    sampler = RandomSampler(3000, config=None)\n    space = problem.configuration_space\n    samples = sampler.sample(space,3000)\n    \n    parameters = [space.map_to_design_space(sample) for sample in samples]\n    import tqdm\n    for para_id in tqdm.tqdm(range(len(parameters))):\n        parameters[para_id]['score'] = problem.f(parameters[para_id])['train_loss']\n    import pandas as pd\n    \n\n    \n    df  = pd.DataFrame(parameters)\n    df.to_csv(f'XGB_{workload}.csv')\n\n    # a = problem.f({'eta':-0.2, 'max_depth':5, 'min_child_weight':2, 'colsample_bytree':0.4, 'colsample_bylevel':0.4,\n    #             'reg_lambda':0.5, 'reg_alpha':-0.2, 'subsample_per_it':0.7, 'n_estimators':20, 'gamma':0.9})\n    \n\n\n"
  },
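  {
    "path": "examples/xgb_scaling_sketch.py",
    "content": "\"\"\"Hypothetical sketch, not a file from the original repository: it illustrates the\nhyperparameter scaling convention in XGBoostBenchmark._get_pipeline, where eta,\nmin_child_weight, reg_lambda and reg_alpha are searched in log2 space (see the bounds in\nget_configuration_space) and mapped through np.exp2 before reaching xgb.XGBClassifier.\nThe concrete values below are placeholders.\n\"\"\"\nimport numpy as np\n\nlog2_config = {'eta': -6.0, 'min_child_weight': 3.0, 'reg_lambda': -2.0, 'reg_alpha': 1.0}\n\n# Same mapping as _get_pipeline: search-space value -> model value.\nmodel_params = {name: float(np.exp2(value)) for name, value in log2_config.items()}\nprint(model_params)  # eta=-6.0 becomes learning_rate = 2**-6 = 0.015625\n"
  },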
  {
    "path": "transopt/benchmark/HPO/__init__.py",
    "content": "from transopt.benchmark.HPO.HPOSVM import SupportVectorMachine\nfrom transopt.benchmark.HPO.HPOXGBoost import XGBoostBenchmark\n\n\nfrom transopt.benchmark.HPOOOD.hpoood import ERMOOD"
  },
  {
    "path": "transopt/benchmark/HPO/algorithms.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\nimport copy\nfrom collections import OrderedDict\n\nimport numpy as np\nimport torch\nimport torch.autograd as autograd\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init\nimport torchvision.models\nfrom transopt.benchmark.HPO import networks\nfrom sklearn.linear_model import SGDClassifier\nimport pyro\nimport pyro.distributions as dist\nfrom pyro.infer import SVI, Trace_ELBO\nfrom pyro.optim import SGD\n\nfrom transopt.benchmark.HPO.augmentation import mixup_data, mixup_criterion\n\nALGORITHMS = [\n    'ERM',\n    'GLMNet',\n    'BayesianNN',\n]\n\ndef get_algorithm_class(algorithm_name):\n    \"\"\"Return the algorithm class with the given name.\"\"\"\n    if algorithm_name not in globals():\n        raise NotImplementedError(\"Algorithm not found: {}\".format(algorithm_name))\n    return globals()[algorithm_name]\n\nclass Algorithm(torch.nn.Module):\n    \"\"\"\n    A subclass of Algorithm implements a domain generalization algorithm.\n    Subclasses should implement the following:\n    - update()\n    - predict()\n    \"\"\"\n    def __init__(self, input_shape, num_classes, architecture, model_size, mixup, device, hparams):\n        super(Algorithm, self).__init__()\n        self.hparams = hparams\n        self.architecture = architecture\n        self.model_size = model_size\n        self.device = device\n        self.mixup = mixup\n        if self.mixup:\n            self.mixup_alpha = self.hparams.get('mixup_alpha', 0.3)\n\n    def update(self, minibatches, unlabeled=None):\n        \"\"\"\n        Perform one update step, given a list of (x, y) tuples for all\n        environments.\n\n        Admits an optional list of unlabeled minibatches from the test domains,\n        when task is domain_adaptation.\n        \"\"\"\n        raise NotImplementedError\n\n    def predict(self, x):\n        raise NotImplementedError\n\nclass ERM(Algorithm):\n    \"\"\"\n    Empirical Risk Minimization (ERM)\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, architecture, model_size, mixup, device, hparams):\n        super(ERM, self).__init__(input_shape, num_classes, architecture, model_size,  mixup, device, hparams)\n        self.featurizer = networks.Featurizer(input_shape, architecture, model_size, self.hparams)\n        print(self.featurizer.n_outputs)\n        self.classifier = networks.Classifier(\n            self.featurizer.n_outputs,\n            num_classes,\n            self.hparams['dropout_rate'],\n            self.hparams['nonlinear_classifier'])\n\n        self.network = nn.Sequential(self.featurizer, self.classifier)\n        self.optimizer = torch.optim.SGD(\n            self.network.parameters(),\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams['weight_decay'],\n            momentum=self.hparams['momentum']\n        )\n\n    def update(self, minibatches, unlabeled=None):\n        all_x = torch.cat([x for x, y in minibatches])\n        all_y = torch.cat([y for x, y in minibatches])\n\n        if self.mixup:\n            all_x, all_y_a, all_y_b, lam = mixup_data(all_x, all_y, self.mixup_alpha,  self.device)\n            all_x, all_y_a, all_y_b = map(torch.autograd.Variable, (all_x, all_y_a, all_y_b))\n\n        predictions = self.predict(all_x)\n\n        if self.mixup:\n            loss = mixup_criterion(F.cross_entropy, predictions, all_y_a, all_y_b, lam)\n        else:\n            loss = F.cross_entropy(predictions, 
all_y)\n\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        if self.mixup:\n            correct = (lam * predictions.argmax(1).eq(all_y_a).float() +\n                       (1 - lam) * predictions.argmax(1).eq(all_y_b).float()).sum().item()\n        else:\n            correct = (predictions.argmax(1) == all_y).sum().item()\n\n        return {'loss': loss.item(), 'correct': correct}\n\n    def predict(self, x):\n        return self.network(x)\n\nclass GLMNet(Algorithm):\n    \"\"\"\n    Generalized Linear Model with Elastic Net Regularization (GLMNet)\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, architecture, model_size, mixup, device, hparams):\n        super(GLMNet, self).__init__(input_shape, num_classes, architecture, model_size,  mixup, device, hparams)\n        self.featurizer = networks.Featurizer(input_shape, architecture, model_size, self.hparams)\n        self.num_classes = num_classes\n        \n        # 使用 SGDClassifier 作为 GLMNet\n        self.classifier = SGDClassifier(\n            loss='log',  # 对数损失，用于分类\n            penalty='elasticnet',  # 弹性网络正则化\n            alpha=self.hparams['glmnet_alpha'],  # 正则化强度\n            l1_ratio=self.hparams['glmnet_l1_ratio'],  # L1 正则化的比例\n            learning_rate='optimal',\n            max_iter=1,  # 每次更新只进行一次迭代\n            warm_start=True,  # 允许增量学习\n            random_state=self.hparams['random_seed']\n        )\n        \n        self.optimizer = torch.optim.SGD(\n            self.featurizer.parameters(),\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams['weight_decay'],\n            momentum=self.hparams['momentum']\n        )\n\n    def update(self, minibatches, unlabeled=None):\n        all_x = torch.cat([x for x, y in minibatches])\n        all_y = torch.cat([y for x, y in minibatches])\n        \n        # 提取特征\n        features = self.featurizer(all_x).detach().cpu().numpy()\n        labels = all_y.cpu().numpy()\n        \n        # 更新 GLMNet 分类器\n        self.classifier.partial_fit(features, labels, classes=np.arange(self.num_classes))\n        \n        # 计算损失（仅用于记录，不用于反向传播）\n        loss = -self.classifier.score(features, labels)\n        \n        # 更新特征提取器\n        self.optimizer.zero_grad()\n        features = self.featurizer(all_x)\n        logits = torch.tensor(self.classifier.decision_function(features.detach().cpu().numpy())).to(all_x.device)\n        feature_loss = F.cross_entropy(logits, all_y)\n        feature_loss.backward()\n        self.optimizer.step()\n\n        return {'loss': loss, 'feature_loss': feature_loss.item()}\n\n    def predict(self, x):\n        features = self.featurizer(x).detach().cpu().numpy()\n        return torch.tensor(self.classifier.predict_proba(features)).to(x.device)\n\nclass BayesianNN(Algorithm):\n    \"\"\"\n    Two-layer Bayesian Neural Network\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, hparams):\n        super(BayesianNN, self).__init__(input_shape, num_classes, None, None, hparams)\n        self.input_dim = input_shape[0] * input_shape[1] * input_shape[2]\n        self.hidden_dim1 = hparams['bayesian_hidden_dim1']\n        self.hidden_dim2 = hparams['bayesian_hidden_dim2']\n        self.output_dim = num_classes\n        self.num_samples = hparams['bayesian_num_samples']\n\n        # Initialize parameters\n        self.w1_mu = nn.Parameter(torch.randn(self.input_dim, self.hidden_dim1))\n        self.w1_sigma = nn.Parameter(torch.randn(self.input_dim, 
self.hidden_dim))\n        self.w2_mu = nn.Parameter(torch.randn(self.hidden_dim2, self.output_dim))\n        self.w2_sigma = nn.Parameter(torch.randn(self.hidden_dim, self.output_dim))\n\n        # Setup Pyro optimizer\n        self.optimizer = SGD({\n            \"lr\": hparams[\"step_length\"],\n            \"weight_decay\": hparams[\"weight_decay\"],\n            \"momentum\": hparams[\"momentum\"]\n        })\n        self.svi = SVI(self.model, self.guide, self.optimizer, loss=Trace_ELBO())\n        \n        self.burn_in = hparams['burn_in']\n        self.step_count = 0\n\n    def model(self, x, y=None):\n        # First layer\n        w1 = pyro.sample(\"w1\", dist.Normal(self.w1_mu, torch.exp(self.w1_sigma)).to_event(2))\n        h = F.relu(x @ w1)\n\n        # Second layer\n        w2 = pyro.sample(\"w2\", dist.Normal(self.w2_mu, torch.exp(self.w2_sigma)).to_event(2))\n        logits = h @ w2\n\n        # Observe data\n        with pyro.plate(\"data\", x.shape[0]):\n            pyro.sample(\"obs\", dist.Categorical(logits=logits), obs=y)\n\n    def guide(self, x, y=None):\n        # First layer\n        w1 = pyro.sample(\"w1\", dist.Normal(self.w1_mu, torch.exp(self.w1_sigma)).to_event(2))\n\n        # Second layer\n        w2 = pyro.sample(\"w2\", dist.Normal(self.w2_mu, torch.exp(self.w2_sigma)).to_event(2))\n\n    def update(self, minibatches, unlabeled=None):\n        all_x = torch.cat([x.view(x.size(0), -1) for x, y in minibatches])\n        all_y = torch.cat([y for x, y in minibatches])\n\n        # Perform SVI step\n        loss = self.svi.step(all_x, all_y)\n        \n        self.step_count += 1\n\n        return {'loss': loss}\n\n    def predict(self, x):\n        x = x.view(x.size(0), -1)\n        num_samples = self.num_samples\n\n        if self.step_count <= self.burn_in:\n            # During burn-in, use point estimates\n            w1 = self.w1_mu\n            w2 = self.w2_mu\n            h = F.relu(x @ w1)\n            logits = h @ w2\n            return F.softmax(logits, dim=-1)\n        else:\n            # After burn-in, use full Bayesian prediction\n            def wrapped_model(x_data):\n                pyro.sample(\"prediction\", dist.Categorical(logits=self.model(x_data)))\n\n            posterior = pyro.infer.Predictive(wrapped_model, guide=self.guide, num_samples=num_samples)(x)\n            predictions = posterior[\"prediction\"]\n            return predictions.float().mean(0)\n\n"
  },
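  {
    "path": "examples/mixup_step_sketch.py",
    "content": "\"\"\"Hypothetical sketch, not a file from the original repository: it replays the mixup\npath of ERM.update in isolation, using mixup_data/mixup_criterion from augmentation.py\non a random minibatch. The batch shape, class count and alpha value are placeholders;\na random tensor stands in for the network output.\n\"\"\"\nimport torch\nimport torch.nn.functional as F\n\nfrom transopt.benchmark.HPO.augmentation import mixup_data, mixup_criterion\n\nx = torch.randn(8, 3, 32, 32)                    # fake image batch\ny = torch.randint(0, 10, (8,))                   # fake labels\nmixed_x, y_a, y_b, lam = mixup_data(x, y, alpha=0.3, device='cpu')\n\nlogits = torch.randn(8, 10, requires_grad=True)  # stand-in for self.predict(mixed_x)\nloss = mixup_criterion(F.cross_entropy, logits, y_a, y_b, lam)\nloss.backward()                                  # gradients flow exactly as in ERM.update\nprint(float(loss))\n"
  },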
  {
    "path": "transopt/benchmark/HPO/augmentation.py",
    "content": "import torch\nimport numpy as np\nimport random\nfrom transopt.benchmark.HPO.image_options import *\n\n\ndef mixup_data(x, y, alpha=0.3, device='cpu'):\n    '''Returns mixed inputs, pairs of targets, and lambda'''\n    if alpha > 0:\n        lam = np.random.beta(alpha, alpha)\n    else:\n        lam = 1\n\n    batch_size = x.size()[0]\n    index = torch.randperm(batch_size).to(device)\n    print('mixup in the device:', device)\n\n    mixed_x = lam * x + (1 - lam) * x[index, :]\n    y_a, y_b = y, y[index]\n    return mixed_x, y_a, y_b, lam\n\n    \ndef mixup_criterion(criterion, pred, y_a, y_b, lam):\n    return lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b)\n\n\n\n\n\n\nclass Cutout(object):\n    \"\"\"Randomly mask out one or more patches from an image.\n\n    Args:\n        n_holes (int): Number of patches to cut out of each image.\n        length (int): The length (in pixels) of each square patch.\n    \"\"\"\n    def __init__(self, n_holes = None, length = None):\n        if n_holes is None:\n            self.n_holes = 1\n        else:\n            self.n_holes = n_holes\n        if length is None:\n            self.length = 16\n        else:\n            self.length = length\n\n    def __call__(self, img):\n        \"\"\"\n        Args:\n            img (Tensor): Tensor image of size (C, H, W).\n        Returns:\n            Tensor: Image with n_holes of dimension length x length cut out of it.\n        \"\"\"\n        h = img.size(1)\n        w = img.size(2)\n\n        mask = np.ones((h, w), np.float32)\n\n        for n in range(self.n_holes):\n            y = np.random.randint(h)\n            x = np.random.randint(w)\n\n            y1 = np.clip(y - self.length // 2, 0, h)\n            y2 = np.clip(y + self.length // 2, 0, h)\n            x1 = np.clip(x - self.length // 2, 0, w)\n            x2 = np.clip(x + self.length // 2, 0, w)\n\n            mask[y1: y2, x1: x2] = 0.\n\n        mask = torch.from_numpy(mask)\n        mask = mask.expand_as(img)\n        img = img * mask\n\n        return img\n    \n    \n    \n    \nclass ImageNetPolicy(object):\n    \"\"\" Randomly choose one of the best 24 Sub-policies on ImageNet.\n\n        Example:\n        >>> policy = ImageNetPolicy()\n        >>> transformed = policy(image)\n\n        Example as a PyTorch Transform:\n        >>> transform = transforms.Compose([\n        >>>     transforms.Resize(256),\n        >>>     ImageNetPolicy(),\n        >>>     transforms.ToTensor()])\n    \"\"\"\n    def __init__(self, fillcolor=(128, 128, 128)):\n        self.policies = [\n            SubPolicy(0.4, \"posterize\", 8, 0.6, \"rotate\", 9, fillcolor),\n            SubPolicy(0.6, \"solarize\", 5, 0.6, \"autocontrast\", 5, fillcolor),\n            SubPolicy(0.8, \"equalize\", 8, 0.6, \"equalize\", 3, fillcolor),\n            SubPolicy(0.6, \"posterize\", 7, 0.6, \"posterize\", 6, fillcolor),\n            SubPolicy(0.4, \"equalize\", 7, 0.2, \"solarize\", 4, fillcolor),\n\n            SubPolicy(0.4, \"equalize\", 4, 0.8, \"rotate\", 8, fillcolor),\n            SubPolicy(0.6, \"solarize\", 3, 0.6, \"equalize\", 7, fillcolor),\n            SubPolicy(0.8, \"posterize\", 5, 1.0, \"equalize\", 2, fillcolor),\n            SubPolicy(0.2, \"rotate\", 3, 0.6, \"solarize\", 8, fillcolor),\n            SubPolicy(0.6, \"equalize\", 8, 0.4, \"posterize\", 6, fillcolor),\n\n            SubPolicy(0.8, \"rotate\", 8, 0.4, \"color\", 0, fillcolor),\n            SubPolicy(0.4, \"rotate\", 9, 0.6, \"equalize\", 2, fillcolor),\n            
SubPolicy(0.0, \"equalize\", 7, 0.8, \"equalize\", 8, fillcolor),\n            SubPolicy(0.6, \"invert\", 4, 1.0, \"equalize\", 8, fillcolor),\n            SubPolicy(0.6, \"color\", 4, 1.0, \"contrast\", 8, fillcolor),\n\n            SubPolicy(0.8, \"rotate\", 8, 1.0, \"color\", 2, fillcolor),\n            SubPolicy(0.8, \"color\", 8, 0.8, \"solarize\", 7, fillcolor),\n            SubPolicy(0.4, \"sharpness\", 7, 0.6, \"invert\", 8, fillcolor),\n            SubPolicy(0.6, \"shearX\", 5, 1.0, \"equalize\", 9, fillcolor),\n            SubPolicy(0.4, \"color\", 0, 0.6, \"equalize\", 3, fillcolor),\n\n            SubPolicy(0.4, \"equalize\", 7, 0.2, \"solarize\", 4, fillcolor),\n            SubPolicy(0.6, \"solarize\", 5, 0.6, \"autocontrast\", 5, fillcolor),\n            SubPolicy(0.6, \"invert\", 4, 1.0, \"equalize\", 8, fillcolor),\n            SubPolicy(0.6, \"color\", 4, 1.0, \"contrast\", 8, fillcolor),\n            SubPolicy(0.8, \"equalize\", 8, 0.6, \"equalize\", 3, fillcolor)\n        ]\n\n    def __call__(self, img):\n        policy_idx = random.randint(0, len(self.policies) - 1)\n        return self.policies[policy_idx](img)\n\n    def __repr__(self):\n        return \"AutoAugment ImageNet Policy\"\n\n\nclass CIFAR10Policy(object):\n    \"\"\" Randomly choose one of the best 25 Sub-policies on CIFAR10.\n\n        Example:\n        >>> policy = CIFAR10Policy()\n        >>> transformed = policy(image)\n\n        Example as a PyTorch Transform:\n        >>> transform=transforms.Compose([\n        >>>     transforms.Resize(256),\n        >>>     CIFAR10Policy(),\n        >>>     transforms.ToTensor()])\n    \"\"\"\n    def __init__(self, fillcolor=(128, 128, 128)):\n        self.policies = [\n            SubPolicy(0.1, \"invert\", 7, 0.2, \"contrast\", 6, fillcolor),\n            SubPolicy(0.7, \"rotate\", 2, 0.3, \"translateX\", 9, fillcolor),\n            SubPolicy(0.8, \"sharpness\", 1, 0.9, \"sharpness\", 3, fillcolor),\n            SubPolicy(0.5, \"shearY\", 8, 0.7, \"translateY\", 9, fillcolor),\n            SubPolicy(0.5, \"autocontrast\", 8, 0.9, \"equalize\", 2, fillcolor),\n\n            SubPolicy(0.2, \"shearY\", 7, 0.3, \"posterize\", 7, fillcolor),\n            SubPolicy(0.4, \"color\", 3, 0.6, \"brightness\", 7, fillcolor),\n            SubPolicy(0.3, \"sharpness\", 9, 0.7, \"brightness\", 9, fillcolor),\n            SubPolicy(0.6, \"equalize\", 5, 0.5, \"equalize\", 1, fillcolor),\n            SubPolicy(0.6, \"contrast\", 7, 0.6, \"sharpness\", 5, fillcolor),\n\n            SubPolicy(0.7, \"color\", 7, 0.5, \"translateX\", 8, fillcolor),\n            SubPolicy(0.3, \"equalize\", 7, 0.4, \"autocontrast\", 8, fillcolor),\n            SubPolicy(0.4, \"translateY\", 3, 0.2, \"sharpness\", 6, fillcolor),\n            SubPolicy(0.9, \"brightness\", 6, 0.2, \"color\", 8, fillcolor),\n            SubPolicy(0.5, \"solarize\", 2, 0.0, \"invert\", 3, fillcolor),\n\n            SubPolicy(0.2, \"equalize\", 0, 0.6, \"autocontrast\", 0, fillcolor),\n            SubPolicy(0.2, \"equalize\", 8, 0.6, \"equalize\", 4, fillcolor),\n            SubPolicy(0.9, \"color\", 9, 0.6, \"equalize\", 6, fillcolor),\n            SubPolicy(0.8, \"autocontrast\", 4, 0.2, \"solarize\", 8, fillcolor),\n            SubPolicy(0.1, \"brightness\", 3, 0.7, \"color\", 0, fillcolor),\n\n            SubPolicy(0.4, \"solarize\", 5, 0.9, \"autocontrast\", 3, fillcolor),\n            SubPolicy(0.9, \"translateY\", 9, 0.7, \"translateY\", 9, fillcolor),\n            SubPolicy(0.9, \"autocontrast\", 2, 0.8, 
\"solarize\", 3, fillcolor),\n            SubPolicy(0.8, \"equalize\", 8, 0.1, \"invert\", 3, fillcolor),\n            SubPolicy(0.7, \"translateY\", 9, 0.9, \"autocontrast\", 1, fillcolor)\n        ]\n\n    def __call__(self, img):\n        policy_idx = random.randint(0, len(self.policies) - 1)\n        return self.policies[policy_idx](img)\n\n    def __repr__(self):\n        return \"AutoAugment CIFAR10 Policy\"\n    \n    \nclass CIFAR10PolicyPhotometric(object):\n    \"\"\" Randomly choose one of the best 25 Sub-policies on CIFAR10.\n\n        Example:\n        >>> policy = CIFAR10Policy()\n        >>> transformed = policy(image)\n\n        Example as a PyTorch Transform:\n        >>> transform=transforms.Compose([\n        >>>     transforms.Resize(256),\n        >>>     CIFAR10Policy(),\n        >>>     transforms.ToTensor()])\n    \"\"\"\n    def __init__(self, fillcolor=(128, 128, 128)):\n        self.policies = [\n            SubPolicy(0.1, \"invert\", 7, 0.2, \"contrast\", 6, fillcolor),\n            SubPolicy(0.8, \"sharpness\", 1, 0.9, \"sharpness\", 3, fillcolor),\n            SubPolicy(0.5, \"autocontrast\", 8, 0.9, \"equalize\", 2, fillcolor),\n\n            SubPolicy(0.4, \"color\", 3, 0.6, \"brightness\", 7, fillcolor),\n            SubPolicy(0.3, \"sharpness\", 9, 0.7, \"brightness\", 9, fillcolor),\n            SubPolicy(0.6, \"equalize\", 5, 0.5, \"equalize\", 1, fillcolor),\n            SubPolicy(0.6, \"contrast\", 7, 0.6, \"sharpness\", 5, fillcolor),\n\n            SubPolicy(0.7, \"color\", 7, 0.5, \"translateX\", 8, fillcolor),\n            SubPolicy(0.3, \"equalize\", 7, 0.4, \"autocontrast\", 8, fillcolor),\n            SubPolicy(0.9, \"brightness\", 6, 0.2, \"color\", 8, fillcolor),\n            SubPolicy(0.5, \"solarize\", 2, 0.0, \"invert\", 3, fillcolor),\n\n            SubPolicy(0.2, \"equalize\", 0, 0.6, \"autocontrast\", 0, fillcolor),\n            SubPolicy(0.2, \"equalize\", 8, 0.6, \"equalize\", 4, fillcolor),\n            SubPolicy(0.9, \"color\", 9, 0.6, \"equalize\", 6, fillcolor),\n            SubPolicy(0.8, \"autocontrast\", 4, 0.2, \"solarize\", 8, fillcolor),\n            SubPolicy(0.1, \"brightness\", 3, 0.7, \"color\", 0, fillcolor),\n\n            SubPolicy(0.4, \"solarize\", 5, 0.9, \"autocontrast\", 3, fillcolor),\n            SubPolicy(0.9, \"autocontrast\", 2, 0.8, \"solarize\", 3, fillcolor),\n            SubPolicy(0.8, \"equalize\", 8, 0.1, \"invert\", 3, fillcolor),\n        ]\n\n    def __call__(self, img):\n        policy_idx = random.randint(0, len(self.policies) - 1)\n        return self.policies[policy_idx](img)\n\n    def __repr__(self):\n        return \"AutoAugment CIFAR10 Photometric Policy\"\n\n\nclass CIFAR10PolicyGeometric(object):\n    \"\"\" Randomly choose one of the best 25 Sub-policies on CIFAR10.\n\n        Example:\n        >>> policy = CIFAR10Policy()\n        >>> transformed = policy(image)\n\n        Example as a PyTorch Transform:\n        >>> transform=transforms.Compose([\n        >>>     transforms.Resize(256),\n        >>>     CIFAR10Policy(),\n        >>>     transforms.ToTensor()])\n    \"\"\"\n    def __init__(self, fillcolor=(128, 128, 128)):\n        self.policies = [\n            SubPolicy(0.7, \"rotate\", 2, 0.3, \"translateX\", 9, fillcolor),\n            SubPolicy(0.5, \"shearY\", 8, 0.7, \"translateY\", 9, fillcolor),\n            SubPolicy(0.5, \"shearX\", 7, 0.3, \"posterize\", 7, fillcolor),\n            SubPolicy(0.9, \"translateY\", 9, 0.7, \"translateY\", 9, fillcolor),\n            
SubPolicy(0.7, \"translateX\", 9, 0.9, \"autocontrast\", 1, fillcolor)\n        ]\n\n    def __call__(self, img):\n        policy_idx = random.randint(0, len(self.policies) - 1)\n        return self.policies[policy_idx](img)\n\n    def __repr__(self):\n        return \"AutoAugment CIFAR10 Geometric Policy\"\n\n\nclass SVHNPolicy(object):\n    \"\"\" Randomly choose one of the best 25 Sub-policies on SVHN.\n\n        Example:\n        >>> policy = SVHNPolicy()\n        >>> transformed = policy(image)\n\n        Example as a PyTorch Transform:\n        >>> transform=transforms.Compose([\n        >>>     transforms.Resize(256),\n        >>>     SVHNPolicy(),\n        >>>     transforms.ToTensor()])\n    \"\"\"\n    def __init__(self, fillcolor=(128, 128, 128)):\n        self.policies = [\n            SubPolicy(0.9, \"shearX\", 4, 0.2, \"invert\", 3, fillcolor),\n            SubPolicy(0.9, \"shearY\", 8, 0.7, \"invert\", 5, fillcolor),\n            SubPolicy(0.6, \"equalize\", 5, 0.6, \"solarize\", 6, fillcolor),\n            SubPolicy(0.9, \"invert\", 3, 0.6, \"equalize\", 3, fillcolor),\n            SubPolicy(0.6, \"equalize\", 1, 0.9, \"rotate\", 3, fillcolor),\n\n            SubPolicy(0.9, \"shearX\", 4, 0.8, \"autocontrast\", 3, fillcolor),\n            SubPolicy(0.9, \"shearY\", 8, 0.4, \"invert\", 5, fillcolor),\n            SubPolicy(0.9, \"shearY\", 5, 0.2, \"solarize\", 6, fillcolor),\n            SubPolicy(0.9, \"invert\", 6, 0.8, \"autocontrast\", 1, fillcolor),\n            SubPolicy(0.6, \"equalize\", 3, 0.9, \"rotate\", 3, fillcolor),\n\n            SubPolicy(0.9, \"shearX\", 4, 0.3, \"solarize\", 3, fillcolor),\n            SubPolicy(0.8, \"shearY\", 8, 0.7, \"invert\", 4, fillcolor),\n            SubPolicy(0.9, \"equalize\", 5, 0.6, \"translateY\", 6, fillcolor),\n            SubPolicy(0.9, \"invert\", 4, 0.6, \"equalize\", 7, fillcolor),\n            SubPolicy(0.3, \"contrast\", 3, 0.8, \"rotate\", 4, fillcolor),\n\n            SubPolicy(0.8, \"invert\", 5, 0.0, \"translateY\", 2, fillcolor),\n            SubPolicy(0.7, \"shearY\", 6, 0.4, \"solarize\", 8, fillcolor),\n            SubPolicy(0.6, \"invert\", 4, 0.8, \"rotate\", 4, fillcolor),\n            SubPolicy(0.3, \"shearY\", 7, 0.9, \"translateX\", 3, fillcolor),\n            SubPolicy(0.1, \"shearX\", 6, 0.6, \"invert\", 5, fillcolor),\n\n            SubPolicy(0.7, \"solarize\", 2, 0.6, \"translateY\", 7, fillcolor),\n            SubPolicy(0.8, \"shearY\", 4, 0.8, \"invert\", 8, fillcolor),\n            SubPolicy(0.7, \"shearX\", 9, 0.8, \"translateY\", 3, fillcolor),\n            SubPolicy(0.8, \"shearY\", 5, 0.7, \"autocontrast\", 3, fillcolor),\n            SubPolicy(0.7, \"shearX\", 2, 0.1, \"invert\", 5, fillcolor)\n        ]\n\n    def __call__(self, img):\n        policy_idx = random.randint(0, len(self.policies) - 1)\n        return self.policies[policy_idx](img)\n\n    def __repr__(self):\n        return \"AutoAugment SVHN Policy\"\n\n\nclass SubPolicy(object):\n    def __init__(self, p1, operation1, magnitude_idx1, p2, operation2, magnitude_idx2, fillcolor=(128, 128, 128)):\n        ranges = {\n            \"shearX\": np.linspace(0, 0.3, 10),\n            \"shearY\": np.linspace(0, 0.3, 10),\n            \"translateX\": np.linspace(0, 150 / 331, 10),\n            \"translateY\": np.linspace(0, 150 / 331, 10),\n            \"rotate\": np.linspace(0, 30, 10),\n            \"color\": np.linspace(0.0, 0.9, 10),\n            \"posterize\": np.round(np.linspace(8, 4, 10), 0).astype(int),  # 修改这里\n            
\"solarize\": np.linspace(256, 0, 10),\n            \"contrast\": np.linspace(0.0, 0.9, 10),\n            \"sharpness\": np.linspace(0.0, 0.9, 10),\n            \"brightness\": np.linspace(0.0, 0.9, 10),\n            \"autocontrast\": [0] * 10,\n            \"equalize\": [0] * 10,\n            \"invert\": [0] * 10\n        }\n\n        func = {\n            \"shearX\": ShearX(fillcolor=fillcolor),\n            \"shearY\": ShearY(fillcolor=fillcolor),\n            \"translateX\": TranslateX(fillcolor=fillcolor),\n            \"translateY\": TranslateY(fillcolor=fillcolor),\n            \"rotate\": Rotate(),\n            \"color\": Color(),\n            \"posterize\": Posterize(),\n            \"solarize\": Solarize(),\n            \"contrast\": Contrast(),\n            \"sharpness\": Sharpness(),\n            \"brightness\": Brightness(),\n            \"autocontrast\": AutoContrast(),\n            \"equalize\": Equalize(),\n            \"invert\": Invert()\n        }\n\n        self.p1 = p1\n        self.operation1 = func[operation1]\n        self.magnitude1 = ranges[operation1][magnitude_idx1]\n        self.p2 = p2\n        self.operation2 = func[operation2]\n        self.magnitude2 = ranges[operation2][magnitude_idx2]\n\n    def __call__(self, img):\n        if random.random() < self.p1:\n            img = self.operation1(img, self.magnitude1)\n        if random.random() < self.p2:\n            img = self.operation2(img, self.magnitude2)\n        return img"
  },
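  {
    "path": "examples/cutout_sketch.py",
    "content": "\"\"\"Hypothetical sketch, not a file from the original repository: it applies the Cutout\ntransform defined in augmentation.py to a random CHW tensor image, zeroing one 16x16\npatch, which is how data_transform in datasets.py uses it for the CIFAR pipelines.\n\"\"\"\nimport torch\n\nfrom transopt.benchmark.HPO.augmentation import Cutout\n\nimg = torch.rand(3, 32, 32)            # fake CIFAR-sized image with values in [0, 1]\ncutout = Cutout(n_holes=1, length=16)  # the defaults, written out explicitly\nmasked = cutout(img)\n\n# One square patch (clipped at the image borders) has been masked to zero.\nprint(int((masked == 0).sum()))\n"
  },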
  {
    "path": "transopt/benchmark/HPO/datasets.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport os\nimport numpy as np\nimport torch\nfrom PIL import Image, ImageFile\nfrom torchvision import transforms\nfrom torch.utils.data import TensorDataset, Subset, ConcatDataset, Dataset\nfrom torchvision.datasets import MNIST, ImageNet, CIFAR10, CIFAR100\n\n\nimport matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE\nfrom torchvision import datasets, transforms\nfrom torch.utils.data import DataLoader\n\n\n\nfrom robustbench.data import load_cifar10c, load_cifar100c, load_imagenetc\n\nfrom transopt.benchmark.HPO.augmentation import ImageNetPolicy, CIFAR10Policy, CIFAR10PolicyGeometric, CIFAR10PolicyPhotometric, Cutout\n\nImageFile.LOAD_TRUNCATED_IMAGES = True\n\n\n\ndef data_transform(dataset_name, augmentation_name=None):\n    if dataset_name.lower() == 'cifar10' or dataset_name.lower() == 'cifar100':\n        mean = (0.4914, 0.4822, 0.4465)\n        std = (0.2023, 0.1994, 0.2010)\n        size = 32\n    elif dataset_name.lower() == 'imagenet':\n        mean = (0.485, 0.456, 0.406)\n        std = (0.229, 0.224, 0.225)\n        size = 224\n    else:\n        raise ValueError(f\"Unsupported dataset: {dataset_name}\")\n\n    # transform_list = [transforms.ToPILImage(), transforms.ToTensor(), transforms.Normalize(mean, std)]\n    transform_list = [transforms.ToPILImage(), transforms.ToTensor()]\n\n    if augmentation_name:\n        if dataset_name.lower() in ['cifar10', 'cifar100']:\n            if augmentation_name.lower() == 'cutout':\n                transform_list.insert(-1,Cutout(n_holes=1, length=16))\n            elif augmentation_name.lower() == 'geometric':\n                transform_list.insert(1, CIFAR10PolicyGeometric())\n            elif augmentation_name.lower() == 'photometric':\n                transform_list.insert(1, CIFAR10PolicyPhotometric())\n            elif augmentation_name.lower() == 'autoaugment':\n                transform_list.insert(1, CIFAR10Policy())\n            elif augmentation_name.lower() == 'mixup':\n                print(\"Mixup should be applied during training, not as part of the transform.\")\n            else:\n                raise ValueError(f\"Unsupported augmentation strategy for CIFAR: {augmentation_name}\")\n        elif dataset_name.lower() == 'imagenet':\n            if augmentation_name.lower() == 'cutout':\n                transform_list.append(Cutout())\n            elif augmentation_name.lower() == 'autoaugment':\n                transform_list.insert(0, ImageNetPolicy())\n            elif augmentation_name.lower() == 'mixup':\n                print(\"Mixup should be applied during training, not as part of the transform.\")\n            else:\n                raise ValueError(f\"Unsupported augmentation strategy for ImageNet: {augmentation_name}\")\n        else:\n            raise ValueError(f\"Unsupported dataset for augmentation: {dataset_name}\")\n    print(transform_list)\n    return transforms.Compose(transform_list)\n\ndef get_dataset_class(dataset_name):\n    \"\"\"Return the dataset class with the given name.\"\"\"\n    if dataset_name not in globals():\n        raise NotImplementedError(\"Dataset not found: {}\".format(dataset_name))\n    return globals()[dataset_name]\n\ndef num_environments(dataset_name):\n    return len(get_dataset_class(dataset_name).ENVIRONMENTS)\n\nclass Dataset:\n    N_STEPS = 5001           # Default, subclasses may override\n    CHECKPOINT_FREQ = 100    # Default, subclasses may override\n    N_WORKERS = 
def get_dataset_class(dataset_name):\n    \"\"\"Return the dataset class with the given name.\"\"\"\n    if dataset_name not in globals():\n        raise NotImplementedError(\"Dataset not found: {}\".format(dataset_name))\n    return globals()[dataset_name]\n\ndef num_environments(dataset_name):\n    return len(get_dataset_class(dataset_name).ENVIRONMENTS)\n\nclass Dataset:\n    \"\"\"Base class for the benchmark datasets below; subclasses populate self.datasets.\"\"\"\n    N_STEPS = 5001           # Default, subclasses may override\n    CHECKPOINT_FREQ = 100    # Default, subclasses may override\n    N_WORKERS = 1            # Default, subclasses may override\n    ENVIRONMENTS = None      # Subclasses should override\n    INPUT_SHAPE = None       # Subclasses should override\n\n    def __getitem__(self, index):\n        return self.datasets[index]\n\n    def __len__(self):\n        return len(self.datasets)\n\nclass RobCifar10(Dataset):\n    def __init__(self, root=None, augment=None):\n        super().__init__()\n        if root is None:\n            user_home = os.path.expanduser('~')\n            root = os.path.join(user_home, 'transopt_tmp/data')\n\n        # Load original CIFAR-10 dataset\n        original_dataset_tr = CIFAR10(root, train=True, download=True)\n        original_dataset_te = CIFAR10(root, train=False, download=True)\n\n        original_images = original_dataset_tr.data\n        original_labels = torch.tensor(original_dataset_tr.targets)\n\n        shuffle = torch.randperm(len(original_images))\n        original_images = original_images[shuffle]\n        original_labels = original_labels[shuffle]\n\n        # 'augment' may be an augmentation name understood by data_transform\n        # (e.g. 'autoaugment'); any non-string value falls back to no augmentation.\n        dataset_transform = data_transform('cifar10', augment if isinstance(augment, str) else None)\n        base_transform = data_transform('cifar10', None)  # un-augmented transform for evaluation sets\n\n        transformed_images = torch.stack([dataset_transform(img) for img in original_images])\n        standard_test_images = torch.stack([base_transform(img) for img in original_dataset_te.data])\n\n        self.input_shape = (3, 32, 32)\n        self.num_classes = 10\n        self.datasets = {}\n\n        # Split into train and validation sets (90/10)\n        val_size = len(transformed_images) // 10\n        self.datasets['train'] = TensorDataset(transformed_images[:-val_size], original_labels[:-val_size])\n        self.datasets['val'] = TensorDataset(transformed_images[-val_size:], original_labels[-val_size:])\n\n        # Standard test set\n        self.datasets['test_standard'] = TensorDataset(standard_test_images, torch.tensor(original_dataset_te.targets))\n\n        # Corruption test sets\n        self.corruptions = [\n            'gaussian_noise', 'shot_noise', 'impulse_noise', 'defocus_blur',\n            'glass_blur', 'motion_blur', 'zoom_blur', 'snow', 'frost', 'fog',\n            'brightness', 'contrast', 'elastic_transform', 'pixelate', 'jpeg_compression'\n        ]\n        
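# (note) severity=5 is the strongest of the 1-5 corruption levels; robustbench\n        # downloads the CIFAR-10-C files into data_dir on first use.\n        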
for corruption in self.corruptions:\n            x_test_corrupt, y_test_corrupt = load_cifar10c(n_examples=5000, corruptions=[corruption], severity=5, data_dir=root)\n            x_test_corrupt = torch.stack([base_transform(img) for img in x_test_corrupt])\n            self.datasets[f'test_corruption_{corruption}'] = TensorDataset(x_test_corrupt, y_test_corrupt)\n\n        # Load CIFAR-10.1 dataset\n        cifar101_path = os.path.join(root, 'cifar10.1_v6_data.npy')\n        cifar101_labels_path = os.path.join(root, 'cifar10.1_v6_labels.npy')\n        if os.path.exists(cifar101_path) and os.path.exists(cifar101_labels_path):\n            cifar101_data = np.load(cifar101_path)\n            cifar101_labels = np.load(cifar101_labels_path)\n            cifar101_data = torch.from_numpy(cifar101_data).float() / 255.0\n            cifar101_data = cifar101_data.permute(0, 3, 1, 2)  # Change from (N, 32, 32, 3) to (N, 3, 32, 32)\n            cifar101_data = torch.stack([base_transform(img) for img in cifar101_data])\n            cifar101_labels = torch.from_numpy(cifar101_labels).long()\n            self.datasets['test_cifar10.1'] = TensorDataset(cifar101_data, cifar101_labels)\n        else:\n            print(\"CIFAR-10.1 dataset not found. Please download it to the data directory.\")\n\n        # Load CIFAR-10.2 dataset\n        cifar102_path = os.path.join(root, 'cifar102_test.npz')\n        if os.path.exists(cifar102_path):\n            cifar102_data = np.load(cifar102_path)\n            cifar102_images = cifar102_data['images']\n            cifar102_labels = cifar102_data['labels']\n            cifar102_images = torch.from_numpy(cifar102_images).float() / 255.0\n            cifar102_images = cifar102_images.permute(0, 3, 1, 2)  # Change from (N, 32, 32, 3) to (N, 3, 32, 32)\n            cifar102_images = torch.stack([base_transform(img) for img in cifar102_images])\n            cifar102_labels = torch.from_numpy(cifar102_labels).long()\n            self.datasets['test_cifar10.2'] = TensorDataset(cifar102_images, cifar102_labels)\n        else:\n            print(\"CIFAR-10.2 dataset not found. Please download it to the data directory.\")\n\n    def get_available_test_set_names(self):\n        \"\"\"\n        Return a list of available dataset names (train, val, and all test splits).\n        \"\"\"\n        return list(self.datasets.keys())\n\n    def get_test_set(self, name):\n        \"\"\"\n        Get a specific test set by name.\n        Available names: 'test_standard', 'test_corruption_<corruption_name>',\n        'test_cifar10.1', 'test_cifar10.2'.\n        \"\"\"\n        return self.datasets.get(name, None)\n\n    def get_all_test_sets(self):\n        \"\"\"\n        Return all available test sets.\n        \"\"\"\n        return {name: ds for name, ds in self.datasets.items() if name.startswith('test_')}\n\nclass RobCifar100(Dataset):\n    def __init__(self, root=None, augment=False):\n        super().__init__()\n        if root is None:\n            user_home = os.path.expanduser('~')\n            root = os.path.join(user_home, 'transopt_tmp/data')\n\n        original_dataset_tr = CIFAR100(root, train=True, download=True)\n        original_dataset_te = CIFAR100(root, train=False, download=True)\n\n        original_images = original_dataset_tr.data\n        original_labels = torch.tensor(original_dataset_tr.targets)\n\n        shuffle = torch.randperm(len(original_images))\n        original_images = original_images[shuffle]\n        original_labels = original_labels[shuffle]\n\n        dataset_transform = self.get_transform(augment)\n\n        transformed_images = torch.stack([dataset_transform(img) for img in original_images])\n\n        self.input_shape = (3, 32, 32)\n        self.num_classes = 100\n        # Note: unlike RobCifar10, the full training set is stored directly as a TensorDataset\n        self.datasets = TensorDataset(transformed_images, original_labels)\n\n        # Standard test set (NHWC -> NCHW; images are scaled to [0, 1] but,\n        # unlike the training transform, not normalized with the mean/std)\n        test_images = torch.tensor(original_dataset_te.data).float().permute(0, 3, 1, 2) / 255.0\n        test_labels = torch.tensor(original_dataset_te.targets)\n        self.test_sets = {'standard': TensorDataset(test_images, test_labels)}\n\n        # Corruption test sets\n        corruptions = [\n            'gaussian_noise', 'shot_noise', 'impulse_noise', 'defocus_blur',\n            'glass_blur', 'motion_blur', 'zoom_blur', 'snow', 'frost', 'fog',\n            'brightness', 'contrast', 'elastic_transform', 'pixelate', 'jpeg_compression'\n        ]\n        for corruption in corruptions:\n            x_test, y_test = load_cifar100c(n_examples=10000, corruptions=[corruption], severity=5, data_dir=root)\n            self.test_sets[f'corruption_{corruption}'] = TensorDataset(x_test, y_test)\n\n    def get_transform(self, augment):\n        if augment:\n            return transforms.Compose([\n                transforms.ToPILImage(),\n                transforms.RandomCrop(32, padding=4),\n                
transforms.RandomHorizontalFlip(),\n                transforms.ToTensor(),\n                transforms.Normalize((0.5071, 0.4867, 0.4408), (0.2675, 0.2565, 0.2761)),\n            ])\n        else:\n            return transforms.Compose([\n                transforms.ToPILImage(),\n                transforms.ToTensor(),\n                transforms.Normalize((0.5071, 0.4867, 0.4408), (0.2675, 0.2565, 0.2761)),\n            ])\n\n    def get_test_set(self, name):\n        \"\"\"\n        Get a specific test set by name.\n        Available names: 'standard', 'corruption_<corruption_name>'\n        \"\"\"\n        return self.test_sets.get(name, None)\n\n    def get_all_test_sets(self):\n        \"\"\"\n        Return all available test sets.\n        \"\"\"\n        return self.test_sets\n\n\nclass RobImageNet(Dataset):\n    def __init__(self, root, augment=False):\n        super().__init__()\n        if root is None:        \n            user_home = os.path.expanduser('~')\n            root = os.path.join(user_home, 'transopt_tmp/data')\n\n        transform = self.get_transform(augment)\n\n        self.datasets = ImageNet(root=root, split='train', transform=transform)\n        self.test_sets = {'standard': ImageNet(root=root, split='val', transform=self.get_transform(False))}\n\n        self.input_shape = (3, 224, 224)\n        self.num_classes = 1000\n\n        # Corruption test sets\n        corruptions = [\n            'gaussian_noise', 'shot_noise', 'impulse_noise', 'defocus_blur',\n            'glass_blur', 'motion_blur', 'zoom_blur', 'snow', 'frost', 'fog',\n            'brightness', 'contrast', 'elastic_transform', 'pixelate', 'jpeg_compression'\n        ]\n        for corruption in corruptions:\n            x_test, y_test = load_imagenetc(n_examples=5000, corruptions=[corruption], severity=5, data_dir=root)\n            self.test_sets[f'corruption_{corruption}'] = TensorDataset(x_test, y_test)\n\n    def get_transform(self, augment):\n        if augment:\n            print(\"Data augmentation is enabled.\")\n            return transforms.Compose([\n                transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),\n                transforms.RandomHorizontalFlip(),\n                transforms.ColorJitter(0.3, 0.3, 0.3, 0.3),\n                transforms.RandomGrayscale(),\n                transforms.ToTensor(),\n                transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n            ])\n        else:\n            print(\"Data augmentation is disabled.\")\n            return transforms.Compose([\n                transforms.Resize(256),\n                transforms.CenterCrop(224),\n                transforms.ToTensor(),\n                transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n            ])\n\n    def get_test_set(self, name):\n        \"\"\"\n        Get a specific test set by name.\n        Available names: 'standard', 'corruption_<corruption_name>'\n        \"\"\"\n        return self.test_sets.get(name, None)\n\n    def get_all_test_sets(self):\n        \"\"\"\n        Return all available test sets.\n        \"\"\"\n        return self.test_sets\n\ndef test_dataset(dataset_name='cifar10', num_samples=5):\n    # Set up the dataset\n    if dataset_name.lower() == 'cifar10':\n        dataset = RobCifar10(root=None, augment=True)\n    else:\n        raise ValueError(f\"Unsupported dataset: {dataset_name}\")\n\n    # Test training data\n    assert 'train' in dataset.datasets, \"Training dataset is missing\"\n    
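# (note) sizes printed below reflect the 90/10 train/val split performed in RobCifar10.__init__\n    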
print(f\"Training dataset size: {len(dataset.datasets['train'])}\")\n    train_sample = dataset.datasets['train'][0]\n    print(f\"Training data shape: {train_sample[0].shape}\")\n    print(f\"Training label shape: {train_sample[1].shape}\")\n\n    # Test validation data\n    assert 'val' in dataset.datasets, \"Validation dataset is missing\"\n    print(f\"Validation dataset size: {len(dataset.datasets['val'])}\")\n\n    # Test standard test set\n    assert 'test_standard' in dataset.datasets, \"Standard test set is missing\"\n    print(f\"Standard test set size: {len(dataset.datasets['test_standard'])}\")\n\n    # Test corruption test sets\n    for corruption in dataset.corruptions[:num_samples]:\n        corruption_key = f'test_corruption_{corruption}'\n        assert corruption_key in dataset.datasets, f\"Corruption test set '{corruption}' is missing\"\n        print(f\"Corruption test set '{corruption}' size: {len(dataset.datasets[corruption_key])}\")\n\n    # Test additional test sets (CIFAR-10.1 and CIFAR-10.2)\n    for additional_test in ['test_cifar10.1', 'test_cifar10.2']:\n        if additional_test in dataset.datasets:\n            print(f\"{additional_test.upper()} test set size: {len(dataset.datasets[additional_test])}\")\n        else:\n            print(f\"{additional_test.upper()} test set not found\")\n\n    # Test data loading\n    print(\"\\nTesting data loading:\")\n    for key, data in dataset.datasets.items():\n        try:\n            sample = data[0]\n            print(f\"Successfully loaded sample from {key}\")\n            if isinstance(sample, tuple):\n                print(f\"  Sample shape: {sample[0].shape}, Label: {sample[1]}\")\n            else:\n                print(f\"  Sample shape: {sample.shape}\")\n        except Exception as e:\n            print(f\"Error loading data from {key}: {str(e)}\")\n\n    print(f\"\\nAll tests for {dataset_name} passed successfully!\")\n    \ndef visualize_dataset_tsne(dataset_name='cifar10', n_samples=1000, perplexity=30, n_iter=1000):\n    # Set up data transformation\n    non_augment = data_transform(dataset_name, augmentation_name=None)\n    augment = data_transform(dataset_name, augmentation_name='photometric')\n\n    # Load dataset\n    if dataset_name.lower() == 'cifar10':\n        dataset = RobCifar10(root=None, augment=False)\n    else:\n        raise ValueError(f\"Unsupported dataset: {dataset_name}\")\n\n    # Prepare data for t-SNE\n    all_images = []\n    all_labels = []\n    dataset_types = []\n\n    for key, data in dataset.datasets.items():\n        loader = DataLoader(data, batch_size=n_samples, shuffle=True)\n        images, labels = next(iter(loader))\n\n        if key == 'train':\n            origin_images = torch.stack([non_augment(img) for img in images])\n            all_images.append(origin_images)\n            all_labels.append(labels)\n            dataset_types.extend(['train_without_aug'] * len(origin_images))\n            \n            augmented_images = torch.stack([augment(img) for img in images])\n            all_images.append(augmented_images)\n            all_labels.append(labels)\n            dataset_types.extend(['augmented'] * len(augmented_images))\n            continue\n        \n        if key.startswith('test_') and key != 'test_standard':\n            all_images.append(images)\n            all_labels.append(labels)\n            dataset_types.extend(['test_ds'] * len(images))\n        # else:\n        #     all_images.append(images)\n        #     all_labels.append(labels)\n        
#     dataset_types.extend([key] * len(images))\n\n    all_images = torch.cat(all_images, dim=0)\n    all_labels = torch.cat(all_labels, dim=0)\n    all_images_flat = all_images.view(all_images.size(0), -1).numpy()\n\n    # Apply t-SNE\n    tsne = TSNE(n_components=2, perplexity=perplexity, n_iter=n_iter, random_state=42)\n    tsne_results = tsne.fit_transform(all_images_flat)\n\n    # Visualize results\n    plt.figure(figsize=(16, 12))\n\n    # Define a fixed color map\n    fixed_color_map = {\n        'train_without_aug': '#1f77b4',  # blue\n        'augmented': '#ff7f0e',          # orange\n        'val': '#2ca02c',                # green\n        'test_standard': '#d62728',      # red\n        'test_ds': '#9467bd',            # purple\n        'test_cifar10.1': '#8c564b',     # brown\n        'test_cifar10.2': '#e377c2'      # pink\n    }\n\n    for dtype in fixed_color_map.keys():\n        mask = np.array(dataset_types) == dtype\n        if np.any(mask):  # Only plot if there are data points for this type\n            plt.scatter(tsne_results[mask, 0], tsne_results[mask, 1], \n                        c=fixed_color_map[dtype], label=dtype, alpha=0.6)\n\n    plt.legend()\n    plt.title(f't-SNE visualization of {dataset_name} dataset')\n    plt.savefig(f'{dataset_name}_tsne_visualization.png')\n    plt.close()\n\n    print(f\"t-SNE visualization has been saved as '{dataset_name}_tsne_visualization.png'\")\n\nif __name__ == \"__main__\":\n    # test_dataset('cifar10')\n    # test_dataset('cifar100')\n    # test_dataset('imagenet')\n\n    visualize_dataset_tsne(dataset_name='cifar10', n_samples=1000)\n\n    # ... (the rest of the code remains unchanged)"
  },
  {
    "path": "transopt/benchmark/HPO/fast_data_loader.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\nimport torch\n\nclass _InfiniteSampler(torch.utils.data.Sampler):\n    \"\"\"Wraps another Sampler to yield an infinite stream.\"\"\"\n    def __init__(self, sampler):\n        self.sampler = sampler\n\n    def __iter__(self):\n        while True:\n            for batch in self.sampler:\n                yield batch\n\nclass InfiniteDataLoader:\n    def __init__(self, dataset, batch_size, num_workers):\n        super().__init__()\n\n\n        sampler = torch.utils.data.RandomSampler(dataset,\n            replacement=True)\n\n        batch_sampler = torch.utils.data.BatchSampler(\n            sampler,\n            batch_size=batch_size,\n            drop_last=True)\n\n        self._infinite_iterator = iter(torch.utils.data.DataLoader(\n            dataset,\n            num_workers=num_workers,\n            batch_sampler=_InfiniteSampler(batch_sampler)\n        ))\n\n    def __iter__(self):\n        while True:\n            yield next(self._infinite_iterator)\n\n    def __len__(self):\n        raise ValueError\n\nclass FastDataLoader:\n    \"\"\"DataLoader wrapper with slightly improved speed by not respawning worker\n    processes at every epoch.\"\"\"\n    def __init__(self, dataset, batch_size, num_workers):\n        super().__init__()\n\n        batch_sampler = torch.utils.data.BatchSampler(\n            torch.utils.data.RandomSampler(dataset, replacement=False),\n            batch_size=batch_size,\n            drop_last=False\n        )\n\n        self._infinite_iterator = iter(torch.utils.data.DataLoader(\n            dataset,\n            num_workers=num_workers,\n            batch_sampler=_InfiniteSampler(batch_sampler)\n        ))\n\n        self._length = len(batch_sampler)\n\n    def __iter__(self):\n        for _ in range(len(self)):\n            yield next(self._infinite_iterator)\n\n    def __len__(self):\n        return self._length\n"
  },
  {
    "path": "transopt/benchmark/HPO/hparams_registry.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport numpy as np\n\n\ndef get_hparams(algorithm, dataset, random_seed, model_size=None, architecture='resnet'):\n    \"\"\"\n    Global registry of hyperparams. Each entry is a (default, random) tuple.\n    New algorithms / networks / etc. should add entries here.\n    \"\"\"\n    hparams = {}\n    hparam_space = get_hparam_space(algorithm, model_size, architecture)\n    random_state = np.random.RandomState(random_seed)\n\n    for name, (hparam_type, range_or_values) in hparam_space.items():\n        if hparam_type == 'categorical':\n            default_val = range_or_values[0]\n            random_val = random_state.choice(range_or_values)\n        elif hparam_type == 'float':\n            default_val = sum(range_or_values) / 2\n            random_val = random_state.uniform(*range_or_values)\n        elif hparam_type == 'int':\n            default_val = int(sum(range_or_values) / 2)\n            random_val = random_state.randint(*range_or_values)\n        elif hparam_type == 'log':\n            default_val = 10 ** (sum(range_or_values) / 2)\n            random_val = 10 ** random_state.uniform(*range_or_values)\n        else:\n            raise ValueError(f\"Unknown hparam type: {hparam_type}\")\n\n        hparams[name] = (default_val, random_val)\n\n    return hparams\n\ndef default_hparams(algorithm, dataset, model_size='small', architecture='resnet'):\n    return {a: b for a, (b, c) in get_hparams(algorithm, dataset, 0, model_size, architecture).items()}\n\ndef random_hparams(algorithm, dataset, seed, model_size='small', architecture='resnet'):\n    return {a: c for a, (b, c) in get_hparams(algorithm, dataset, seed, model_size, architecture).items()}\n\ndef get_hparam_space(algorithm, model_size=None, architecture='resnet'):\n    \"\"\"\n    Returns a dictionary of hyperparameter spaces for the given algorithm and dataset.\n    Each entry is a tuple of (type, range) where type is 'float', 'int', or 'categorical'.\n    \"\"\"\n    hparam_space = {}\n\n    if algorithm in ['ERM', 'GLMNet', 'BayesianNN']:\n        hparam_space['lr'] = ('log', (-6, -2))\n        hparam_space['weight_decay'] = ('log', (-7, -4))\n        hparam_space['momentum'] = ('float', (0.5, 0.999))\n        hparam_space['batch_size'] = ('categorical', [16, 32, 64, 128])\n\n    if algorithm == 'ERM':\n        # hparam_space['batch_size'] = ('categorical', [16, 32, 64, 128])\n        hparam_space['dropout_rate'] = ('float', (0, 0.5))\n        if architecture.lower() == 'cnn':\n            hparam_space['hidden_dim1'] = ('categorical', [32, 64, 128])\n            hparam_space['hidden_dim2'] = ('categorical', [32, 64, 128])\n\n    if algorithm == 'GLMNet':\n        hparam_space['glmnet_alpha'] = ('log', (-4, 1))\n        hparam_space['glmnet_l1_ratio'] = ('float', (0, 1))\n\n    if algorithm == 'BayesianNN':\n        hparam_space['bayesian_num_samples'] = ('categorical', [5, 10, 20, 50])\n        hparam_space['bayesian_hidden_dim1'] = ('categorical', [32, 64, 128, 256])\n        hparam_space['bayesian_hidden_dim2'] = ('categorical', [32, 64, 128, 256])\n        hparam_space['step_length'] = ('log', (-4, -1))\n        hparam_space['burn_in'] = ('categorical', [500, 1000, 2000, 5000])\n\n    # Add hidden dimensions for CNN architecture\n\n\n\n    return hparam_space\n\ndef test_hparam_registry():\n    algorithms = ['ERM', 'GLMNet', 'BayesianNN']\n    datasets = ['RobCifar10', 'RobCifar100', 'RobImageNet']\n    architectures = ['resnet', 
def default_hparams(algorithm, dataset, model_size='small', architecture='resnet'):\n    return {a: b for a, (b, c) in get_hparams(algorithm, dataset, 0, model_size, architecture).items()}\n\ndef random_hparams(algorithm, dataset, seed, model_size='small', architecture='resnet'):\n    return {a: c for a, (b, c) in get_hparams(algorithm, dataset, seed, model_size, architecture).items()}\n\ndef get_hparam_space(algorithm, model_size=None, architecture='resnet'):\n    \"\"\"\n    Returns a dictionary of hyperparameter spaces for the given algorithm.\n    Each entry is a tuple of (type, range) where type is 'float', 'int',\n    'categorical', or 'log' (a base-10 exponent range).\n    model_size is currently unused but kept for future size-dependent spaces.\n    \"\"\"\n    hparam_space = {}\n\n    if algorithm in ['ERM', 'GLMNet', 'BayesianNN']:\n        hparam_space['lr'] = ('log', (-6, -2))\n        hparam_space['weight_decay'] = ('log', (-7, -4))\n        hparam_space['momentum'] = ('float', (0.5, 0.999))\n        hparam_space['batch_size'] = ('categorical', [16, 32, 64, 128])\n\n    if algorithm == 'ERM':\n        hparam_space['dropout_rate'] = ('float', (0, 0.5))\n        # Hidden dimensions only apply to the CNN architecture\n        if architecture.lower() == 'cnn':\n            hparam_space['hidden_dim1'] = ('categorical', [32, 64, 128])\n            hparam_space['hidden_dim2'] = ('categorical', [32, 64, 128])\n\n    if algorithm == 'GLMNet':\n        hparam_space['glmnet_alpha'] = ('log', (-4, 1))\n        hparam_space['glmnet_l1_ratio'] = ('float', (0, 1))\n\n    if algorithm == 'BayesianNN':\n        hparam_space['bayesian_num_samples'] = ('categorical', [5, 10, 20, 50])\n        hparam_space['bayesian_hidden_dim1'] = ('categorical', [32, 64, 128, 256])\n        hparam_space['bayesian_hidden_dim2'] = ('categorical', [32, 64, 128, 256])\n        hparam_space['step_length'] = ('log', (-4, -1))\n        hparam_space['burn_in'] = ('categorical', [500, 1000, 2000, 5000])\n\n    return hparam_space\n\ndef test_hparam_registry():\n    algorithms = ['ERM', 'GLMNet', 'BayesianNN']\n    datasets = ['RobCifar10', 'RobCifar100', 'RobImageNet']\n\n    for algorithm in algorithms:\n        for dataset in datasets:\n            print(f\"\\nTesting: Algorithm={algorithm}, Dataset={dataset}\")\n\n            # Get default hyperparameters\n            default_hparam = default_hparams(algorithm, dataset)\n            print(\"\\nDefault hyperparameters:\")\n            for hparam, value in default_hparam.items():\n                print(f\"  {hparam}: {value}\")\n\n            # Get random hyperparameters\n            random_hparam = random_hparams(algorithm, dataset, seed=42)\n            print(\"\\nRandom hyperparameters:\")\n            for hparam, value in random_hparam.items():\n                print(f\"  {hparam}: {value}\")\n\n            # Get hyperparameter space\n            hparam_space = get_hparam_space(algorithm)\n            print(\"\\nHyperparameter space:\")\n            for hparam, (htype, hrange) in hparam_space.items():\n                print(f\"  {hparam}: type={htype}, range={hrange}\")\n\n            print(\"\\n\" + \"=\"*50)\n\nif __name__ == \"__main__\":\n    test_hparam_registry()"
  },
  {
    "path": "transopt/benchmark/HPO/image_options.py",
    "content": "from PIL import Image, ImageEnhance, ImageOps\nimport random\n\n\nclass ShearX(object):\n    def __init__(self, fillcolor=(128, 128, 128)):\n        self.fillcolor = fillcolor\n\n    def __call__(self, x, magnitude):\n        return x.transform(\n            x.size, Image.AFFINE, (1, magnitude * random.choice([-1, 1]), 0, 0, 1, 0),\n            Image.BICUBIC, fillcolor=self.fillcolor)\n\n\nclass ShearY(object):\n    def __init__(self, fillcolor=(128, 128, 128)):\n        self.fillcolor = fillcolor\n\n    def __call__(self, x, magnitude):\n        return x.transform(\n            x.size, Image.AFFINE, (1, 0, 0, magnitude * random.choice([-1, 1]), 1, 0),\n            Image.BICUBIC, fillcolor=self.fillcolor)\n\n\nclass TranslateX(object):\n    def __init__(self, fillcolor=(128, 128, 128)):\n        self.fillcolor = fillcolor\n\n    def __call__(self, x, magnitude):\n        return x.transform(\n            x.size, Image.AFFINE, (1, 0, magnitude * x.size[0] * random.choice([-1, 1]), 0, 1, 0),\n            fillcolor=self.fillcolor)\n\n\nclass TranslateY(object):\n    def __init__(self, fillcolor=(128, 128, 128)):\n        self.fillcolor = fillcolor\n\n    def __call__(self, x, magnitude):\n        return x.transform(\n            x.size, Image.AFFINE, (1, 0, 0, 0, 1, magnitude * x.size[1] * random.choice([-1, 1])),\n            fillcolor=self.fillcolor)\n\n\nclass Rotate(object):\n    # from https://stackoverflow.com/questions/\n    # 5252170/specify-image-filling-color-when-rotating-in-python-with-pil-and-setting-expand\n    def __call__(self, x, magnitude):\n        rot = x.convert(\"RGBA\").rotate(magnitude * random.choice([-1, 1]))\n        return Image.composite(rot, Image.new(\"RGBA\", rot.size, (128,) * 4), rot).convert(x.mode)\n\n\nclass Color(object):\n    def __call__(self, x, magnitude):\n        return ImageEnhance.Color(x).enhance(1 + magnitude * random.choice([-1, 1]))\n\n\nclass Posterize(object):\n    def __call__(self, x, magnitude):\n        return ImageOps.posterize(x, magnitude)\n\n\nclass Solarize(object):\n    def __call__(self, x, magnitude):\n        return ImageOps.solarize(x, magnitude)\n\n\nclass Contrast(object):\n    def __call__(self, x, magnitude):\n        return ImageEnhance.Contrast(x).enhance(1 + magnitude * random.choice([-1, 1]))\n\n\nclass Sharpness(object):\n    def __call__(self, x, magnitude):\n        return ImageEnhance.Sharpness(x).enhance(1 + magnitude * random.choice([-1, 1]))\n\n\nclass Brightness(object):\n    def __call__(self, x, magnitude):\n        return ImageEnhance.Brightness(x).enhance(1 + magnitude * random.choice([-1, 1]))\n\n\nclass AutoContrast(object):\n    def __call__(self, x, magnitude):\n        return ImageOps.autocontrast(x)\n\n\nclass Equalize(object):\n    def __call__(self, x, magnitude):\n        return ImageOps.equalize(x)\n\n\nclass Invert(object):\n    def __call__(self, x, magnitude):\n        return ImageOps.invert(x)\n"
  },
  {
    "path": "transopt/benchmark/HPO/misc.py",
    "content": "import math\nimport hashlib\nimport sys\nfrom collections import OrderedDict\nfrom numbers import Number\nimport operator\n\nimport numpy as np\nimport torch\nfrom collections import Counter\nfrom itertools import cycle\nimport matplotlib.pyplot as plt\n\n\n\n\nclass _SplitDataset(torch.utils.data.Dataset):\n    \"\"\"Used by split_dataset\"\"\"\n    def __init__(self, underlying_dataset, keys):\n        super(_SplitDataset, self).__init__()\n        self.underlying_dataset = underlying_dataset\n        self.keys = keys\n    def __getitem__(self, key):\n        return self.underlying_dataset[self.keys[key]]\n    def __len__(self):\n        return len(self.keys)\n\ndef split_dataset(dataset, n, seed=0):\n    \"\"\"\n    Return a pair of datasets corresponding to a random split of the given\n    dataset, with n datapoints in the first dataset and the rest in the last,\n    using the given random seed\n    \"\"\"\n    assert(n <= len(dataset))\n    keys = list(range(len(dataset)))\n    np.random.RandomState(seed).shuffle(keys)\n    keys_1 = keys[:n]\n    keys_2 = keys[n:]\n    return _SplitDataset(dataset, keys_1), _SplitDataset(dataset, keys_2)\n\n\ndef accuracy(network, loader, device):\n    correct = 0\n    total = 0\n    weights_offset = 0\n\n    network.eval()\n    with torch.no_grad():\n        for x, y in loader:\n            x = x.to(device)\n            y = y.to(device)\n            p = network.predict(x)\n            if p.size(1) == 1:\n                correct += (p.gt(0).eq(y).float()).sum().item()\n            else:\n                correct += (p.argmax(1).eq(y).float()).sum().item()\n            total += torch.ones(len(x)).sum().item()\n    network.train()\n\n    return correct / total\n\n\n\ndef print_row(row, colwidth=10, latex=False):\n    if latex:\n        sep = \" & \"\n        end_ = \"\\\\\\\\\"\n    else:\n        sep = \"  \"\n        end_ = \"\"\n\n    def format_val(x):\n        if np.issubdtype(type(x), np.floating):\n            x = \"{:.10f}\".format(x)\n        return str(x).ljust(colwidth)[:colwidth]\n    print(sep.join([format_val(x) for x in row]), end_)\n    \n    \n    \nclass LossPlotter:\n    def __init__(self):\n        self.classification_losses = []  # 用于存储分类损失\n        self.reconstruction_losses = []  # 用于存储重构损失\n        self.epochs = []  # 用于存储训练的 epoch 数\n        self.cur = 0\n\n        # 初始化绘图\n        plt.ion()  # 开启交互模式\n        self.fig, self.ax = plt.subplots(figsize=(10, 5))\n\n    def update(self, classification_loss, reconstruction_loss):\n        # 更新损失和 epoch 数据\n        self.cur += 1\n        self.classification_losses.append(classification_loss)\n        self.reconstruction_losses.append(reconstruction_loss)\n        self.epochs.append(self.cur)\n\n        # 清空当前的图像\n        self.ax.clear()\n\n        # 绘制分类损失曲线\n        self.ax.plot(self.epochs, self.classification_losses, label='Classification Loss', color='blue', marker='o')\n        \n        # 绘制重构损失曲线\n        self.ax.plot(self.epochs, self.reconstruction_losses, label='Reconstruction Loss', color='orange', marker='x')\n\n        # 设置图表标题和标签\n        self.ax.set_title('Loss Curves')\n        self.ax.set_xlabel('Epoch')\n        self.ax.set_ylabel('Loss')\n\n        # 显示图例\n        self.ax.legend()\n\n        # 更新图表\n        plt.draw()\n        plt.pause(0.01)  # 暂停以便更新图像\n\n    def show(self):\n        # 展示最终图像并关闭交互模式\n        plt.ioff()\n        plt.savefig('loss_curves.png')"
  },
  {
    "path": "transopt/benchmark/HPO/networks.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\nimport copy\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init\nimport torchvision.models\n\n\nSUPPORTED_ARCHITECTURES = {\n    'resnet': [18, 34, 50, 101],\n    'densenet': [121, 169, 201],\n    'wideresnet': [16, 22, 28, 40],\n    'alexnet': [1],\n    'cnn': [1]\n}\n\ndef Featurizer(input_shape, architecture, model_size, hparams):\n    \"\"\"Select an appropriate featurizer based on the input shape and hparams.\"\"\"\n\n    if architecture == 'densenet':\n        return DenseNet(input_shape, model_size, hparams)\n    elif architecture == 'resnet':\n        return ResNet(input_shape, model_size, hparams)\n    elif architecture == 'wideresnet':\n        return WideResNet(input_shape, model_size, hparams)\n    elif architecture == 'alexnet':\n        return AlexNet(input_shape, hparams)\n    elif architecture == 'cnn':\n        return CNN(input_shape, hparams)\n    else:\n        raise ValueError(f\"Unsupported network architecture: {architecture}\")\n    \nclass Identity(nn.Module):\n    \"\"\"An identity layer\"\"\"\n    def __init__(self):\n        super(Identity, self).__init__()\n\n    def forward(self, x):\n        return x\n\n\nclass MLP(nn.Module):\n    \"\"\"Just  an MLP\"\"\"\n    def __init__(self, n_inputs, n_outputs, hparams):\n        super(MLP, self).__init__()\n        self.input = nn.Linear(n_inputs, hparams['mlp_width'])\n        self.dropout = nn.Dropout(hparams['dropout_rate'])\n        self.hiddens = nn.ModuleList([\n            nn.Linear(hparams['mlp_width'], hparams['mlp_width'])\n            for _ in range(hparams['mlp_depth']-2)])\n        self.output = nn.Linear(hparams['mlp_width'], n_outputs)\n        self.n_outputs = n_outputs\n\n    def forward(self, x):\n        x = self.input(x)\n        x = self.dropout(x)\n        x = F.relu(x)\n        for hidden in self.hiddens:\n            x = hidden(x)\n            x = self.dropout(x)\n            x = F.relu(x)\n        # x = self.output(x)\n        return x\n\nclass ResNet(torch.nn.Module):\n    \"\"\"ResNet with the softmax chopped off and the batchnorm frozen\"\"\"\n    def __init__(self, input_shape, model_size, hparams):\n        super(ResNet, self).__init__()\n        if model_size == 18:\n            self.network = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.IMAGENET1K_V1)\n            self.n_outputs = 512\n        elif model_size == 101:\n            self.network = torchvision.models.resnet101(weights=torchvision.models.ResNet101_Weights.IMAGENET1K_V2)\n            self.n_outputs = 2048\n        elif model_size == 34:\n            self.network = torchvision.models.resnet34(weights=torchvision.models.ResNet34_Weights.IMAGENET1K_V1)\n            self.n_outputs = 512\n        elif model_size == 50:\n            self.network = torchvision.models.resnet50(weights=torchvision.models.ResNet50_Weights.IMAGENET1K_V2)\n            self.n_outputs = 2048\n        else:\n            raise ValueError(f\"Unsupported ResNet model size: {model_size}\")\n        \n\n        # adapt number of channels\n        nc = input_shape[0]\n        if nc != 3:\n            tmp = self.network.conv1.weight.data.clone()\n\n            self.network.conv1 = nn.Conv2d(\n                nc, 64, kernel_size=(7, 7),\n                stride=(2, 2), padding=(3, 3), bias=False)\n\n            for i in range(nc):\n                self.network.conv1.weight.data[:, i, 
:, :] = tmp[:, i % 3, :, :]\n\n        # save memory\n        del self.network.fc\n        self.network.fc = Identity()\n\n        self.freeze_bn()\n        self.hparams = hparams\n\n    def forward(self, x):\n        \"\"\"Encode x into a feature vector of size n_outputs.\"\"\"\n        return self.network(x)\n\n    def train(self, mode=True):\n        \"\"\"\n        Override the default train() to freeze the BN parameters\n        \"\"\"\n        super().train(mode)\n        self.freeze_bn()\n\n    def freeze_bn(self):\n        for m in self.network.modules():\n            if isinstance(m, nn.BatchNorm2d):\n                m.eval()\n\n\ndef conv3x3(in_planes, out_planes, stride=1):\n    return nn.Conv2d(\n        in_planes,\n        out_planes,\n        kernel_size=3,\n        stride=stride,\n        padding=1,\n        bias=True)\n\n\ndef conv_init(m):\n    classname = m.__class__.__name__\n    if classname.find('Conv') != -1:\n        init.xavier_uniform_(m.weight, gain=np.sqrt(2))\n        init.constant_(m.bias, 0)\n    elif classname.find('BatchNorm') != -1:\n        init.constant_(m.weight, 1)\n        init.constant_(m.bias, 0)\n\n\nclass wide_basic(nn.Module):\n    def __init__(self, in_planes, planes, dropout_rate, stride=1):\n        super(wide_basic, self).__init__()\n        self.bn1 = nn.BatchNorm2d(in_planes)\n        self.conv1 = nn.Conv2d(\n            in_planes, planes, kernel_size=3, stride=1, padding=1, bias=False)\n        self.dropout = nn.Dropout(p=dropout_rate)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.conv2 = nn.Conv2d(\n            planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)\n\n        self.shortcut = nn.Sequential()\n        if stride != 1 or in_planes != planes:\n            self.shortcut = nn.Sequential(\n                nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride, padding=0, bias=True))\n\n    def forward(self, x):\n        out = self.dropout(self.conv1(F.relu(self.bn1(x))))\n        out = self.conv2(F.relu(self.bn2(out)))\n        out += self.shortcut(x)\n\n        return out\n\n\nclass WideResNet(nn.Module):\n    \"\"\"WideResNet with the softmax layer removed\"\"\"\n    def __init__(self, input_shape, model_size, hparams):\n        super(WideResNet, self).__init__()\n        \n        # Define configurations for different model sizes\n        configs = {\n            28: (28, 10),  # WRN-28-10\n            16: (16, 8),   # WRN-16-8\n            40: (40, 2),   # WRN-40-2\n            22: (22, 2)    # WRN-22-2\n        }\n        \n        if model_size not in configs:\n            raise ValueError(f\"Unsupported model size: {model_size}. 
Choose from {list(configs.keys())}\")\n        \n        self.depth, self.widen_factor = configs[model_size]\n        self.nChannels = [16, 16*self.widen_factor, 32*self.widen_factor, 64*self.widen_factor]\n        self.in_planes = 16\n\n        assert ((self.depth-4) % 6 == 0), 'Wide-resnet depth should be 6n+4'\n        n = (self.depth-4) // 6\n        self.n_outputs = self.nChannels[3]\n        self.dropout = hparams['dropout_rate']\n\n        self.conv1 = nn.Conv2d(input_shape[0], self.nChannels[0], kernel_size=3, stride=1, padding=1, bias=False)\n        self.layer1 = self._wide_layer(wide_basic, self.nChannels[1], n, self.dropout, stride=1)\n        self.layer2 = self._wide_layer(wide_basic, self.nChannels[2], n, self.dropout, stride=2)\n        self.layer3 = self._wide_layer(wide_basic, self.nChannels[3], n, self.dropout, stride=2)\n        self.bn1 = nn.BatchNorm2d(self.nChannels[3], momentum=0.9)\n    \n\n    def _wide_layer(self, block, planes, num_blocks, dropout, stride):\n        strides = [stride] + [1]*(num_blocks-1)\n        layers = []\n\n        for stride in strides:\n            layers.append(block(self.in_planes, planes, dropout, stride))\n            self.in_planes = planes\n\n        return nn.Sequential(*layers)\n\n    def forward(self, x):\n        out = self.conv1(x)\n        out = self.layer1(out)\n        out = self.layer2(out)\n        out = self.layer3(out)\n        out = F.relu(self.bn1(out))\n        out = F.avg_pool2d(out, 8)\n        out = out.view(out.size(0), -1)\n        return out\n\n\nclass DenseNet(nn.Module):\n    \"\"\"DenseNet with the softmax layer removed\"\"\"\n    def __init__(self, input_shape, model_size, hparams):\n        super(DenseNet, self).__init__()\n        self.model_size = model_size\n        if self.model_size == 121:\n            self.network = torchvision.models.densenet121(weights=torchvision.models.DenseNet121_Weights.IMAGENET1K_V1)\n            self.n_outputs = 1024\n        elif self.model_size == 169:\n            self.network = torchvision.models.densenet169(weights=torchvision.models.DenseNet169_Weights.IMAGENET1K_V1)\n            self.n_outputs = 1664\n        elif self.model_size == 201:\n            self.network = torchvision.models.densenet201(weights=torchvision.models.DenseNet201_Weights.IMAGENET1K_V1)\n            self.n_outputs = 1920\n        else:\n            raise ValueError(\"Unsupported DenseNet depth. 
Choose from 121, 169, or 201.\")\n\n        # Adapt number of channels\n        nc = input_shape[0]\n        if nc != 3:\n            self.network.features.conv0 = nn.Conv2d(nc, 64, kernel_size=7, stride=2, padding=3, bias=False)\n\n        # Remove the last fully connected layer\n        self.network.classifier = Identity()\n\n        # self.dropout = nn.Dropout(hparams['dropout_rate'])\n\n    def forward(self, x):\n        features = self.network(x)\n        return features\n    \n    \n    \n\nclass ht_CNN(nn.Module):\n    \"\"\"\n    Hand-tuned architecture for MNIST.\n    Weirdness I've noticed so far with this architecture:\n    - adding a linear layer after the mean-pool in features hurts\n        RotatedMNIST-100 generalization severely.\n    \"\"\"\n    n_outputs = 128\n\n    def __init__(self, input_shape):\n        super(ht_CNN, self).__init__()\n        self.conv1 = nn.Conv2d(input_shape[0], 64, 3, 1, padding=1)\n        self.conv2 = nn.Conv2d(64, 128, 3, stride=2, padding=1)\n        self.conv3 = nn.Conv2d(128, 128, 3, 1, padding=1)\n        self.conv4 = nn.Conv2d(128, 128, 3, 1, padding=1)\n\n        self.bn0 = nn.GroupNorm(8, 64)\n        self.bn1 = nn.GroupNorm(8, 128)\n        self.bn2 = nn.GroupNorm(8, 128)\n        self.bn3 = nn.GroupNorm(8, 128)\n\n        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = F.relu(x)\n        x = self.bn0(x)\n\n        x = self.conv2(x)\n        x = F.relu(x)\n        x = self.bn1(x)\n\n        x = self.conv3(x)\n        x = F.relu(x)\n        x = self.bn2(x)\n\n        x = self.conv4(x)\n        x = F.relu(x)\n        x = self.bn3(x)\n\n        x = self.avgpool(x)\n        x = x.view(len(x), -1)\n        return x\n\nclass CNN(nn.Module):\n    \"\"\"\n    Two-layer CNN with hidden dimensions determined by hparams.\n    \"\"\"\n    def __init__(self, input_shape, hparams):\n        super(CNN, self).__init__()\n        self.conv1 = nn.Conv2d(input_shape[0], hparams['hidden_dim1'], 3, 1, padding=1)\n        self.conv2 = nn.Conv2d(hparams['hidden_dim1'], hparams['hidden_dim2'], 3, 1, padding=1)\n        \n        self.bn1 = nn.BatchNorm2d(hparams['hidden_dim1'])\n        self.bn2 = nn.BatchNorm2d(hparams['hidden_dim2'])\n        \n        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))\n        \n        self.n_outputs = hparams['hidden_dim2']\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = F.relu(x)\n        x = self.bn1(x)\n\n        x = self.conv2(x)\n        x = F.relu(x)\n        x = self.bn2(x)\n\n        x = self.avgpool(x)\n        x = x.view(len(x), -1)\n        return x\n\n\nclass ContextNet(nn.Module):\n    def __init__(self, input_shape):\n        super(ContextNet, self).__init__()\n\n        # Keep same dimensions\n        padding = (5 - 1) // 2\n        self.context_net = nn.Sequential(\n            nn.Conv2d(input_shape[0], 64, 5, padding=padding),\n            nn.BatchNorm2d(64),\n            nn.ReLU(),\n            nn.Conv2d(64, 64, 5, padding=padding),\n            nn.BatchNorm2d(64),\n            nn.ReLU(),\n            nn.Conv2d(64, 1, 5, padding=padding),\n        )\n\n    def forward(self, x):\n        return self.context_net(x)\n\nclass AlexNet(nn.Module):\n    \"\"\"AlexNet with the classifier layer removed\"\"\"\n    def __init__(self, input_shape, hparams):\n        super(AlexNet, self).__init__()\n        self.input_shape = input_shape\n        self.hparams = hparams\n        \n        self.features = nn.Sequential(\n            
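# CIFAR-friendly AlexNet stem (3x3 kernels, stride 2); the classic 11x11/stride-4\n            # ImageNet stem assumes 224x224 inputs and would collapse 32x32 feature maps.\n            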
nn.Conv2d(input_shape[0], 64, kernel_size=3, stride=2, padding=1),\n            nn.ReLU(inplace=True),\n            nn.MaxPool2d(kernel_size=2),\n            nn.Conv2d(64, 192, kernel_size=3, padding=1),\n            nn.ReLU(inplace=True),\n            nn.MaxPool2d(kernel_size=2),\n            nn.Conv2d(192, 384, kernel_size=3, padding=1),\n            nn.ReLU(inplace=True),\n            nn.Conv2d(384, 256, kernel_size=3, padding=1),\n            nn.ReLU(inplace=True),\n            nn.Conv2d(256, 256, kernel_size=3, padding=1),\n            nn.ReLU(inplace=True),\n            nn.MaxPool2d(kernel_size=2),\n        )\n        self.avgpool = nn.AdaptiveAvgPool2d((6, 6))\n\n        # Calculate the correct n_outputs\n        with torch.no_grad():\n            dummy_input = torch.zeros(1, *input_shape)\n            features_output = self.features(dummy_input)\n            avgpool_output = self.avgpool(features_output)\n            self.n_outputs = avgpool_output.view(avgpool_output.size(0), -1).shape[1]\n\n    def forward(self, x):\n        x = self.features(x)\n        x = self.avgpool(x)\n        x = x.view(x.size(0), -1)\n        return x\n\n\n\ndef Classifier(in_features, out_features, dropout=0.5, is_nonlinear=False):\n    if is_nonlinear:\n        hidden1 = max(in_features // 2, 64)  # Ensure at least 64 neurons\n        hidden2 = max(hidden1 // 2, 32)      # Ensure at least 32 neurons\n        return torch.nn.Sequential(\n            torch.nn.Dropout(p=dropout),\n            torch.nn.Linear(in_features, hidden1),\n            torch.nn.ReLU(),\n            torch.nn.Dropout(p=dropout),\n            torch.nn.Linear(hidden1, hidden2),\n            torch.nn.ReLU(),\n            torch.nn.Linear(hidden2, out_features)\n            )\n    else:\n        return torch.nn.Linear(in_features, out_features)\n\n"
  },
  {
    "path": "transopt/benchmark/HPO/test_model.py",
    "content": "import os\nimport torch\nfrom torch.utils.data import DataLoader\nfrom torchvision import transforms\nfrom torchvision.transforms import ToPILImage\nimport matplotlib.pyplot as plt\nfrom transopt.benchmark.HPO import algorithms\nfrom transopt.benchmark.HPO import datasets\n\n# 定义读取模型的路径\nmodel_path = os.path.expanduser('~/transopt_tmp/output/models/ROBERM_RobCifar10_0/model.pkl')\nalgorithm_name = 'ROBERM'\ndataset_name = 'RobCifar10'\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n\nhparams = {\n    'batch_size': 64,\n    'nonlinear_classifier': False,\n    'lr': 0.001,\n    'weight_decay': 0.00001\n}\n\ndataset = vars(datasets)[dataset_name]()\nalgorithm_class = algorithms.get_algorithm_class(algorithm_name)\nalgorithm = algorithm_class(dataset.input_shape, dataset.num_classes, len(dataset), hparams)\n\n# 加载模型\ncheckpoint = torch.load(model_path, map_location=device)  # 加载模型到指定设备\nalgorithm.load_state_dict(checkpoint['model_dict'])\nalgorithm.to(device)\nalgorithm.eval()  # 设置为评估模式\n\n# 定义数据转换（将图像转换为 Tensor）\ntransform = transforms.Compose([\n    transforms.ToTensor(),  # 转换为 Tensor\n])\n\n# 加载 CIFAR-10 测试数据集\ntest_dataset = dataset.test\ntest_loader = DataLoader(test_dataset, batch_size=1, shuffle=True)\n\n# 定义将 Tensor 转换为 PIL 图像的工具\nto_pil_image = ToPILImage()\n\n# 选择一个测试样本并进行预测\nfor images, labels in test_loader:\n    \n    images = images.to(device)\n    labels = labels.to(device)\n\n    with torch.no_grad():  # 禁用梯度计算\n        outputs, reconstructed_images = algorithm.predict(images)  # 确保 predict 返回解码图像\n        _, predicted = torch.max(outputs, 1)\n\n    # 打印预测结果\n    print(f\"Predicted Label: {predicted.item()}\")\n\n    # 将原图和重构图像转换为 PIL 图像\n    original_image_pil = to_pil_image(images.squeeze(0).cpu())  # 原图\n    reconstructed_image_pil = to_pil_image(reconstructed_images.squeeze(0).cpu())  # 重构图\n\n    # 使用 matplotlib 显示原图和重构图的对比\n    fig, axes = plt.subplots(1, 2, figsize=(10, 5))\n\n    # 显示原图\n    axes[0].imshow(original_image_pil)\n    axes[0].set_title('Original Image')\n    axes[0].axis('off')  # 隐藏坐标轴\n\n    # 显示重构图\n    axes[1].imshow(reconstructed_image_pil)\n    axes[1].set_title(f'Reconstructed Image\\nPredicted Label: {predicted.item()}')\n    axes[1].axis('off')  # 隐藏坐标轴\n\n    plt.tight_layout()\n    plt.savefig('rec.png')\n\n    # 仅显示第一个测试样本\n    break"
  },
  {
    "path": "transopt/benchmark/HPO/visualization.py",
    "content": "import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE\nfrom torchvision import datasets, transforms\nfrom torchvision.transforms import AutoAugmentPolicy, AutoAugment, RandAugment\nimport torch\n\ndef get_cifar10_data(transform):\n    dataset = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)\n    dataloader = torch.utils.data.DataLoader(dataset, batch_size=1000, shuffle=True)\n    images, labels = next(iter(dataloader))\n    return images.numpy().reshape(1000, -1), labels.numpy()\n\n# Define transforms\ntransforms_list = {\n    'No Augmentation': transforms.ToTensor(),\n    'Random Crop': transforms.Compose([\n        transforms.RandomCrop(32, padding=4),\n        transforms.ToTensor(),\n    ]),\n    'Random Horizontal Flip': transforms.Compose([\n        transforms.RandomHorizontalFlip(),\n        transforms.ToTensor(),\n    ]),\n    'Color Jitter': transforms.Compose([\n        transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),\n        transforms.ToTensor(),\n    ]),\n    'Brightness': transforms.Compose([\n        transforms.ColorJitter(brightness=0.5),\n        transforms.ToTensor(),\n    ]),\n    'Solarize': transforms.Compose([\n        transforms.RandomSolarize(threshold=128),\n        transforms.ToTensor(),\n    ]),\n    'Shear': transforms.Compose([\n        transforms.RandomAffine(degrees=0, shear=15),\n        transforms.ToTensor(),\n    ]),\n}\n\n# Prepare data for all transforms\nall_data = []\nall_labels = []\nfor name, transform in transforms_list.items():\n    print(f\"Processing {name}...\")\n    data, labels = get_cifar10_data(transform)\n    all_data.append(data)\n    all_labels.append(np.full(labels.shape, list(transforms_list.keys()).index(name)))\n\n# Combine all data\ncombined_data = np.vstack(all_data)\ncombined_labels = np.hstack(all_labels)\n\n# Perform t-SNE on combined data\ntsne = TSNE(n_components=2, random_state=42)\ntsne_results = tsne.fit_transform(combined_data)\n\n# Visualize results\nplt.figure(figsize=(16, 16))\nscatter = plt.scatter(tsne_results[:, 0], tsne_results[:, 1], c=combined_labels, cmap='tab10')\nplt.title('t-SNE Visualization of CIFAR-10 with Different Augmentations')\nplt.xlabel('t-SNE feature 1')\nplt.ylabel('t-SNE feature 2')\n\n# Add legend\nlegend_elements = [plt.Line2D([0], [0], marker='o', color='w', label=method, \n                   markerfacecolor=plt.cm.tab10(i/len(transforms_list)), markersize=10)\n                   for i, method in enumerate(transforms_list.keys())]\nplt.legend(handles=legend_elements, title='Augmentation Methods', loc='center left', bbox_to_anchor=(1, 0.5))\n\nplt.tight_layout()\nplt.savefig('cifar10_augmentations_tsne.png', dpi=300, bbox_inches='tight')\nplt.show()\n\nprint(\"Visualization complete. Check the output image: cifar10_augmentations_tsne.png\")"
  },
  {
    "path": "transopt/benchmark/HPO/wide_resnet.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\n\"\"\"\nFrom https://github.com/meliketoy/wide-resnet.pytorch\n\"\"\"\n\nimport sys\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init\nfrom torch.autograd import Variable\n\n\ndef conv3x3(in_planes, out_planes, stride=1):\n    return nn.Conv2d(\n        in_planes,\n        out_planes,\n        kernel_size=3,\n        stride=stride,\n        padding=1,\n        bias=True)\n\n\ndef conv_init(m):\n    classname = m.__class__.__name__\n    if classname.find('Conv') != -1:\n        init.xavier_uniform_(m.weight, gain=np.sqrt(2))\n        init.constant_(m.bias, 0)\n    elif classname.find('BatchNorm') != -1:\n        init.constant_(m.weight, 1)\n        init.constant_(m.bias, 0)\n\n\nclass wide_basic(nn.Module):\n    def __init__(self, in_planes, planes, dropout_rate, stride=1):\n        super(wide_basic, self).__init__()\n        self.bn1 = nn.BatchNorm2d(in_planes)\n        self.conv1 = nn.Conv2d(\n            in_planes, planes, kernel_size=3, padding=1, bias=True)\n        self.dropout = nn.Dropout(p=dropout_rate)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.conv2 = nn.Conv2d(\n            planes, planes, kernel_size=3, stride=stride, padding=1, bias=True)\n\n        self.shortcut = nn.Sequential()\n        if stride != 1 or in_planes != planes:\n            self.shortcut = nn.Sequential(\n                nn.Conv2d(\n                    in_planes, planes, kernel_size=1, stride=stride,\n                    bias=True), )\n\n    def forward(self, x):\n        out = self.dropout(self.conv1(F.relu(self.bn1(x))))\n        out = self.conv2(F.relu(self.bn2(out)))\n        out += self.shortcut(x)\n\n        return out\n\n\nclass Wide_ResNet(nn.Module):\n    \"\"\"Wide Resnet with the softmax layer chopped off\"\"\"\n    def __init__(self, input_shape, depth, widen_factor, dropout_rate):\n        super(Wide_ResNet, self).__init__()\n        self.in_planes = 16\n\n        assert ((depth - 4) % 6 == 0), 'Wide-resnet depth should be 6n+4'\n        n = (depth - 4) / 6\n        k = widen_factor\n\n        # print('| Wide-Resnet %dx%d' % (depth, k))\n        nStages = [16, 16 * k, 32 * k, 64 * k]\n\n        self.conv1 = conv3x3(input_shape[0], nStages[0])\n        self.layer1 = self._wide_layer(\n            wide_basic, nStages[1], n, dropout_rate, stride=1)\n        self.layer2 = self._wide_layer(\n            wide_basic, nStages[2], n, dropout_rate, stride=2)\n        self.layer3 = self._wide_layer(\n            wide_basic, nStages[3], n, dropout_rate, stride=2)\n        self.bn1 = nn.BatchNorm2d(nStages[3], momentum=0.9)\n\n        self.n_outputs = nStages[3]\n\n    def _wide_layer(self, block, planes, num_blocks, dropout_rate, stride):\n        strides = [stride] + [1] * (int(num_blocks) - 1)\n        layers = []\n\n        for stride in strides:\n            layers.append(block(self.in_planes, planes, dropout_rate, stride))\n            self.in_planes = planes\n\n        return nn.Sequential(*layers)\n\n    def forward(self, x):\n        out = self.conv1(x)\n        out = self.layer1(out)\n        out = self.layer2(out)\n        out = self.layer3(out)\n        out = F.relu(self.bn1(out))\n        out = F.avg_pool2d(out, 8)\n        return out[:, :, 0, 0]\n"
  },
  {
    "path": "transopt/benchmark/HPOB/HpobBench.py",
    "content": "import copy\n\nimport numpy as np\nimport json\n\nimport os\nimport matplotlib.pyplot as plt\nos.environ['OMP_NUM_THREADS'] = \"1\"\n\n\n##/mnt/data/cola/hpob-data\nclass HPOb():\n    def __init__(self, search_space_id, data_set_id, xdim, path='./Benchmark/HPOB/hpob-data'):\n        self.name = f'HPOb_{xdim}d_{data_set_id}'\n        self.search_space_id = search_space_id\n        self.data_set_id = data_set_id\n        self.xdim = xdim\n        self.query_num = 0\n        self.task_type = 'Tabular'\n        with open(path + \"/meta-test-dataset.json\", \"r\") as f:\n            data_set = json.load(f)\n            data_set = data_set[search_space_id][data_set_id]\n\n        self.data_set = data_set\n        self.RX = [[0,1] for i in range(xdim)]\n        self.bounds = np.array([[-1.0] * self.xdim, [1.0] * self.xdim])\n\n        self.unobserved_indexs = list(range(len(data_set['y'])))\n        self.observed_indexs = []\n\n        self.data_input = {index: value for index, value in enumerate(data_set['X'])}\n        self.data_output ={index: value for index, value in enumerate(data_set['y'])}\n        self.unobserved_input = {index: value for index, value in enumerate(data_set['X'])}\n        self.unobserved_output = {index: value for index, value in enumerate(data_set['y'])}\n\n\n\n\n    def transfer(self, X):\n        return (X + 1) * (self.RX[:, 1] - self.RX[:, 0]) / 2 + (self.RX[:, 0])\n\n    def normalize(self, X):\n        return 2 * (X - (self.RX[:, 0])) / (self.RX[:, 1] - self.RX[:, 0]) - 1\n\n    def data_num(self):\n        return len(self.unobserved_output)\n\n    def get_var(self, indexs):\n        X = [self.unobserved_input[idx] for idx in indexs]\n        return  np.array(X)\n\n    def get_idx(self, vars):\n        unob_idx = []\n        vars = np.array(vars)\n        for var in vars:\n            for idx in self.unobserved_indexs:\n                if np.all(var == self.unobserved_input[idx]):\n                    unob_idx.append(idx)\n\n        return  unob_idx\n\n    def get_all_unobserved_var(self):\n        return np.array(list(self.unobserved_input.values()))\n\n    def get_all_unobserved_idxs(self):\n        return self.unobserved_indexs\n\n    def f(self,X, indexs):\n        self.query_num += len(indexs)\n        y = []\n        for idx in indexs:\n            y.append(self.unobserved_output[idx][0])\n            del self.unobserved_output[idx]\n            del self.unobserved_input[idx]\n            self.unobserved_indexs.remove(idx)\n            self.observed_indexs.append(idx)\n        f = np.array(y)\n        return f\n\n\n\n\ndataset_dic = {'4796': ['3549', '3918', '9903', '23'],\n               '5527': ['146064', '146065', '9914', '145804', '31', '10101'],\n               '5636': ['146064', '145804', '9914', '146065', '10101', '31'],\n               '5859': ['9983', '31', '37', '3902', '9977', '125923'], '5860': ['14965', '9976', '3493'],\n               '5891': ['9889', '3899', '6566', '9980', '3891', '3492'], '5906': ['9971', '3918'],\n               '5965': ['145836', '9914', '3903', '10101', '9889', '49', '9946'],\n               '5970': ['37', '3492', '9952', '49', '34536', '14951'],\n               '5971': ['10093', '3954', '43', '34536', '9970', '6566'],\n               '6766': ['3903', '146064', '145953', '145804', '31', '10101'],\n               '6767': ['146065', '145804', '146064', '9914', '9967', '31'],\n               '6794': ['145804', '3', '146065', '10101', '9914', '31'],\n               '7607': ['14965', '145976', '3896', '3913', 
dataset_dic = {'4796': ['3549', '3918', '9903', '23'],\n               '5527': ['146064', '146065', '9914', '145804', '31', '10101'],\n               '5636': ['146064', '145804', '9914', '146065', '10101', '31'],\n               '5859': ['9983', '31', '37', '3902', '9977', '125923'], '5860': ['14965', '9976', '3493'],\n               '5891': ['9889', '3899', '6566', '9980', '3891', '3492'], '5906': ['9971', '3918'],\n               '5965': ['145836', '9914', '3903', '10101', '9889', '49', '9946'],\n               '5970': ['37', '3492', '9952', '49', '34536', '14951'],\n               '5971': ['10093', '3954', '43', '34536', '9970', '6566'],\n               '6766': ['3903', '146064', '145953', '145804', '31', '10101'],\n               '6767': ['146065', '145804', '146064', '9914', '9967', '31'],\n               '6794': ['145804', '3', '146065', '10101', '9914', '31'],\n               '7607': ['14965', '145976', '3896', '3913', '3903', '9946', '9967'],\n               '7609': ['145854', '3903', '9967', '145853', '34537', '125923', '145878'],\n               '5889': ['9971', '3918']}\n\ndef calculate_correlation(x1, y1, X2, Y2):\n    # Euclidean distances between x1 and every row of X2\n    distances = cdist(x1.reshape(1, -1), X2, metric='euclidean')\n\n    # Index of the nearest point in X2\n    closest_index = np.argmin(distances)\n\n    # Pairwise correlation between the two response vectors\n    correlation = np.corrcoef(y1, Y2[closest_index].flatten())[0, 1]\n\n    return correlation\n\n\nif __name__ == '__main__':\n    search_space_id = '6794'\n    for data_set_id in dataset_dic[search_space_id]:\n        hpo = HPOb(search_space_id=search_space_id, data_set_id=data_set_id, xdim=10, path='./hpob-data')\n        data_x = np.array(hpo.data_set['X'])\n        data_y = hpo.data_set['y']\n\n        # Sort by y and keep the sorted indices\n        sorted_indices = np.argsort(data_y, axis=0)\n\n        # Reorder X according to the sorted indices\n        sorted_X = data_x[sorted_indices[:, 0]]\n        # Plot the heatmap\n        plt.figure(figsize=(8, 6))\n        plt.imshow(sorted_X, aspect='auto', cmap='viridis')\n        plt.colorbar()\n        plt.title('Heatmap of Sorted X')\n        plt.xlabel('Features')\n        plt.ylabel('Samples (Sorted by Y)')\n        plt.savefig(f'heatmap_sorted_X_dataset_{data_set_id}.png')\n"
  },
  {
    "path": "transopt/benchmark/HPOB/plot.py",
    "content": "import matplotlib.pyplot as plt\nimport numpy as np\n\n# 假设你有一个N*1的数组\na = [0.62559]*1000\nb = [0.31532]*50\nc = [0.22537] * 20\ndata = np.array([a,b,c])\ncolors = ['red', 'green', 'blue']\n# 绘制箱线图\nplt.bar(x = [0.62559, 0.31532, 0.22537], height=[1000,50,20],width = 0.05, color=colors)\n\n\n# 添加横纵轴标签和标题\nplt.xlabel('Value')\nplt.ylabel('NUmber of Value')\nplt.title('Distribution of HPOBench-B Surrogate model')\nplt.savefig('toy')\n"
  },
  {
    "path": "transopt/benchmark/HPOOOD/algorithms.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.autograd as autograd\n\nimport copy\nimport numpy as np\nfrom collections import OrderedDict\n\n\nfrom transopt.benchmark.HPOOOD import networks\nfrom transopt.benchmark.HPOOOD.misc import (\n    random_pairs_of_minibatches, split_meta_train_test, ParamDict,\n    MovingAverage, l2_between_dicts, proj, Nonparametric, SupConLossLambda\n)\n\n\n\nALGORITHMS = [\n    'ERM',\n    'Fish',\n    'IRM',\n    'GroupDRO',\n    'Mixup',\n    'MLDG',\n    'CORAL',\n    'MMD',\n    'DANN',\n    'CDANN',\n    'MTL',\n    'SagNet',\n    'ARM',\n    'VREx',\n    'RSC',\n    'SD',\n    'ANDMask',\n    'SANDMask',\n    'IGA',\n    'SelfReg',\n    \"Fishr\",\n    'TRM',\n    'IB_ERM',\n    'IB_IRM',\n    'CAD',\n    'CondCAD',\n    'Transfer',\n    'CausIRL_CORAL',\n    'CausIRL_MMD',\n    'EQRM',\n    'RDM',\n    'ADRMX',\n]\n\ndef get_algorithm_class(algorithm_name):\n    \"\"\"Return the algorithm class with the given name.\"\"\"\n    if algorithm_name not in globals():\n        raise NotImplementedError(\"Algorithm not found: {}\".format(algorithm_name))\n    return globals()[algorithm_name]\n\nclass Algorithm(torch.nn.Module):\n    \"\"\"\n    A subclass of Algorithm implements a domain generalization algorithm.\n    Subclasses should implement the following:\n    - update()\n    - predict()\n    \"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(Algorithm, self).__init__()\n        self.hparams = hparams\n\n    def update(self, minibatches, unlabeled=None):\n        \"\"\"\n        Perform one update step, given a list of (x, y) tuples for all\n        environments.\n\n        Admits an optional list of unlabeled minibatches from the test domains,\n        when task is domain_adaptation.\n        \"\"\"\n        raise NotImplementedError\n\n    def predict(self, x):\n        raise NotImplementedError\n\nclass ERM(Algorithm):\n    \"\"\"\n    Empirical Risk Minimization (ERM)\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(ERM, self).__init__(input_shape, num_classes, num_domains,\n                                  hparams)\n        self.featurizer = networks.Featurizer(input_shape, self.hparams)\n        self.classifier = networks.Classifier(\n            self.featurizer.n_outputs,\n            num_classes,\n            self.hparams['nonlinear_classifier'])\n\n        self.network = nn.Sequential(self.featurizer, self.classifier)\n        self.optimizer = torch.optim.Adam(\n            self.network.parameters(),\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams['weight_decay'],\n        )\n\n    def update(self, minibatches, unlabeled=None):\n        all_x = torch.cat([x for x, y in minibatches])\n        all_y = torch.cat([y for x, y in minibatches])\n        loss = F.cross_entropy(self.predict(all_x), all_y)\n\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        return {'loss': loss.item()}\n\n    def predict(self, x):\n        return self.network(x)\n\n\nclass Fish(Algorithm):\n    \"\"\"\n    Implementation of Fish, as seen in Gradient Matching for Domain\n    Generalization, Shi et al. 
2021.\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(Fish, self).__init__(input_shape, num_classes, num_domains,\n                                   hparams)\n        self.input_shape = input_shape\n        self.num_classes = num_classes\n\n        self.network = networks.WholeFish(input_shape, num_classes, hparams)\n        self.optimizer = torch.optim.Adam(\n            self.network.parameters(),\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams['weight_decay']\n        )\n        self.optimizer_inner_state = None\n\n    def create_clone(self, device):\n        self.network_inner = networks.WholeFish(self.input_shape, self.num_classes, self.hparams,\n                                            weights=self.network.state_dict()).to(device)\n        self.optimizer_inner = torch.optim.Adam(\n            self.network_inner.parameters(),\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams['weight_decay']\n        )\n        if self.optimizer_inner_state is not None:\n            self.optimizer_inner.load_state_dict(self.optimizer_inner_state)\n\n    def fish(self, meta_weights, inner_weights, lr_meta):\n        meta_weights = ParamDict(meta_weights)\n        inner_weights = ParamDict(inner_weights)\n        meta_weights += lr_meta * (inner_weights - meta_weights)\n        return meta_weights\n\n    def update(self, minibatches, unlabeled=None):\n        self.create_clone(minibatches[0][0].device)\n\n        for x, y in minibatches:\n            loss = F.cross_entropy(self.network_inner(x), y)\n            self.optimizer_inner.zero_grad()\n            loss.backward()\n            self.optimizer_inner.step()\n\n        self.optimizer_inner_state = self.optimizer_inner.state_dict()\n        meta_weights = self.fish(\n            meta_weights=self.network.state_dict(),\n            inner_weights=self.network_inner.state_dict(),\n            lr_meta=self.hparams[\"meta_lr\"]\n        )\n        self.network.reset_weights(meta_weights)\n\n        return {'loss': loss.item()}\n\n    def predict(self, x):\n        return self.network(x)\n\n\nclass ARM(ERM):\n    \"\"\" Adaptive Risk Minimization (ARM) \"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        original_input_shape = input_shape\n        input_shape = (1 + original_input_shape[0],) + original_input_shape[1:]\n        super(ARM, self).__init__(input_shape, num_classes, num_domains,\n                                  hparams)\n        self.context_net = networks.ContextNet(original_input_shape)\n        self.support_size = hparams['batch_size']\n\n    def predict(self, x):\n        batch_size, c, h, w = x.shape\n        if batch_size % self.support_size == 0:\n            meta_batch_size = batch_size // self.support_size\n            support_size = self.support_size\n        else:\n            meta_batch_size, support_size = 1, batch_size\n        context = self.context_net(x)\n        context = context.reshape((meta_batch_size, support_size, 1, h, w))\n        context = context.mean(dim=1)\n        context = torch.repeat_interleave(context, repeats=support_size, dim=0)\n        x = torch.cat([x, context], dim=1)\n        return self.network(x)\n\n\nclass AbstractDANN(Algorithm):\n    \"\"\"Domain-Adversarial Neural Networks (abstract class)\"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains,\n                 hparams, conditional, class_balance):\n\n        super(AbstractDANN, 
self).__init__(input_shape, num_classes, num_domains,\n                                  hparams)\n\n        self.register_buffer('update_count', torch.tensor([0]))\n        self.conditional = conditional\n        self.class_balance = class_balance\n\n        # Algorithms\n        self.featurizer = networks.Featurizer(input_shape, self.hparams)\n        self.classifier = networks.Classifier(\n            self.featurizer.n_outputs,\n            num_classes,\n            self.hparams['nonlinear_classifier'])\n        self.discriminator = networks.MLP(self.featurizer.n_outputs,\n            num_domains, self.hparams)\n        self.class_embeddings = nn.Embedding(num_classes,\n            self.featurizer.n_outputs)\n\n        # Optimizers\n        self.disc_opt = torch.optim.Adam(\n            (list(self.discriminator.parameters()) +\n                list(self.class_embeddings.parameters())),\n            lr=self.hparams[\"lr_d\"],\n            weight_decay=self.hparams['weight_decay_d'],\n            betas=(self.hparams['beta1'], 0.9))\n\n        self.gen_opt = torch.optim.Adam(\n            (list(self.featurizer.parameters()) +\n                list(self.classifier.parameters())),\n            lr=self.hparams[\"lr_g\"],\n            weight_decay=self.hparams['weight_decay_g'],\n            betas=(self.hparams['beta1'], 0.9))\n\n    def update(self, minibatches, unlabeled=None):\n        device = \"cuda\" if minibatches[0][0].is_cuda else \"cpu\"\n        self.update_count += 1\n        all_x = torch.cat([x for x, y in minibatches])\n        all_y = torch.cat([y for x, y in minibatches])\n        all_z = self.featurizer(all_x)\n        if self.conditional:\n            disc_input = all_z + self.class_embeddings(all_y)\n        else:\n            disc_input = all_z\n        disc_out = self.discriminator(disc_input)\n        disc_labels = torch.cat([\n            torch.full((x.shape[0], ), i, dtype=torch.int64, device=device)\n            for i, (x, y) in enumerate(minibatches)\n        ])\n\n        if self.class_balance:\n            y_counts = F.one_hot(all_y).sum(dim=0)\n            weights = 1. 
/ (y_counts[all_y] * y_counts.shape[0]).float()\n            disc_loss = F.cross_entropy(disc_out, disc_labels, reduction='none')\n            disc_loss = (weights * disc_loss).sum()\n        else:\n            disc_loss = F.cross_entropy(disc_out, disc_labels)\n\n        input_grad = autograd.grad(\n            F.cross_entropy(disc_out, disc_labels, reduction='sum'),\n            [disc_input], create_graph=True)[0]\n        grad_penalty = (input_grad**2).sum(dim=1).mean(dim=0)\n        disc_loss += self.hparams['grad_penalty'] * grad_penalty\n\n        d_steps_per_g = self.hparams['d_steps_per_g_step']\n        if (self.update_count.item() % (1+d_steps_per_g) < d_steps_per_g):\n\n            self.disc_opt.zero_grad()\n            disc_loss.backward()\n            self.disc_opt.step()\n            return {'disc_loss': disc_loss.item()}\n        else:\n            all_preds = self.classifier(all_z)\n            classifier_loss = F.cross_entropy(all_preds, all_y)\n            gen_loss = (classifier_loss +\n                        (self.hparams['lambda'] * -disc_loss))\n            self.disc_opt.zero_grad()\n            self.gen_opt.zero_grad()\n            gen_loss.backward()\n            self.gen_opt.step()\n            return {'gen_loss': gen_loss.item()}\n\n    def predict(self, x):\n        return self.classifier(self.featurizer(x))\n\nclass DANN(AbstractDANN):\n    \"\"\"Unconditional DANN\"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(DANN, self).__init__(input_shape, num_classes, num_domains,\n            hparams, conditional=False, class_balance=False)\n\n\nclass CDANN(AbstractDANN):\n    \"\"\"Conditional DANN\"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(CDANN, self).__init__(input_shape, num_classes, num_domains,\n            hparams, conditional=True, class_balance=True)\n\n\nclass IRM(ERM):\n    \"\"\"Invariant Risk Minimization\"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(IRM, self).__init__(input_shape, num_classes, num_domains,\n                                  hparams)\n        self.register_buffer('update_count', torch.tensor([0]))\n\n    @staticmethod\n    def _irm_penalty(logits, y):\n        device = \"cuda\" if logits[0][0].is_cuda else \"cpu\"\n        scale = torch.tensor(1.).to(device).requires_grad_()\n        loss_1 = F.cross_entropy(logits[::2] * scale, y[::2])\n        loss_2 = F.cross_entropy(logits[1::2] * scale, y[1::2])\n        grad_1 = autograd.grad(loss_1, [scale], create_graph=True)[0]\n        grad_2 = autograd.grad(loss_2, [scale], create_graph=True)[0]\n        result = torch.sum(grad_1 * grad_2)\n        return result\n\n    def update(self, minibatches, unlabeled=None):\n        device = \"cuda\" if minibatches[0][0].is_cuda else \"cpu\"\n        penalty_weight = (self.hparams['irm_lambda'] if self.update_count\n                          >= self.hparams['irm_penalty_anneal_iters'] else\n                          1.0)\n        nll = 0.\n        penalty = 0.\n\n        all_x = torch.cat([x for x, y in minibatches])\n        all_logits = self.network(all_x)\n        all_logits_idx = 0\n        for i, (x, y) in enumerate(minibatches):\n            logits = all_logits[all_logits_idx:all_logits_idx + x.shape[0]]\n            all_logits_idx += x.shape[0]\n            nll += F.cross_entropy(logits, y)\n            penalty += self._irm_penalty(logits, y)\n        nll /= len(minibatches)\n        penalty /= 
len(minibatches)\n        loss = nll + (penalty_weight * penalty)\n\n        if self.update_count == self.hparams['irm_penalty_anneal_iters']:\n            # Reset Adam, because it doesn't like the sharp jump in gradient\n            # magnitudes that happens at this step.\n            self.optimizer = torch.optim.Adam(\n                self.network.parameters(),\n                lr=self.hparams[\"lr\"],\n                weight_decay=self.hparams['weight_decay'])\n\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        self.update_count += 1\n        return {'loss': loss.item(), 'nll': nll.item(),\n            'penalty': penalty.item()}\n\nclass RDM(ERM):\n    \"\"\"RDM - Domain Generalization via Risk Distribution Matching (https://arxiv.org/abs/2310.18598) \"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(RDM, self).__init__(input_shape, num_classes, num_domains, hparams)\n        self.register_buffer('update_count', torch.tensor([0]))\n\n    def my_cdist(self, x1, x2): \n        x1_norm = x1.pow(2).sum(dim=-1, keepdim=True)\n        x2_norm = x2.pow(2).sum(dim=-1, keepdim=True) \n\n        res = torch.addmm(x2_norm.transpose(-2, -1),\n                          x1,\n                          x2.transpose(-2, -1), alpha=-2).add_(x1_norm)\n        return res.clamp_min_(1e-30)\n\n    def gaussian_kernel(self, x, y, gamma=[0.0001, 0.001, 0.01, 0.1, 1, 10, 100,\n                                           1000]):\n        D = self.my_cdist(x, y)\n        K = torch.zeros_like(D)\n\n        for g in gamma:\n            K.add_(torch.exp(D.mul(-g)))\n\n        return K\n\n    def mmd(self, x, y):\n        Kxx = self.gaussian_kernel(x, x).mean()\n        Kyy = self.gaussian_kernel(y, y).mean()\n        Kxy = self.gaussian_kernel(x, y).mean()\n        return Kxx + Kyy - 2 * Kxy\n    \n    @staticmethod\n    def _moment_penalty(p_mean, q_mean, p_var, q_var):\n        return (p_mean - q_mean) ** 2 + (p_var - q_var) ** 2\n    \n    @staticmethod\n    def _kl_penalty(p_mean, q_mean, p_var, q_var):\n        return 0.5 * torch.log(q_var/p_var)+ ((p_var)+(p_mean-q_mean)**2)/(2*q_var) - 0.5\n    \n    def _js_penalty(self, p_mean, q_mean, p_var, q_var):\n        m_mean = (p_mean + q_mean) / 2\n        m_var = (p_var + q_var) / 4\n        \n        return self._kl_penalty(p_mean, m_mean, p_var, m_var) + self._kl_penalty(q_mean, m_mean, q_var, m_var)\n    \n    def update(self, minibatches, unlabeled=None, held_out_minibatches=None):\n        matching_penalty_weight = (self.hparams['rdm_lambda'] if self.update_count\n                          >= self.hparams['rdm_penalty_anneal_iters'] else\n                          0.)\n        \n        variance_penalty_weight = (self.hparams['variance_weight'] if self.update_count\n                          >= self.hparams['rdm_penalty_anneal_iters'] else\n                          0.)\n\n        all_x = torch.cat([x for x, y in minibatches])\n        all_logits = self.predict(all_x)\n        losses = torch.zeros(len(minibatches)).cuda()\n        all_logits_idx = 0\n        all_confs_envs = None\n        \n        for i, (x, y) in enumerate(minibatches):\n            logits = all_logits[all_logits_idx:all_logits_idx + x.shape[0]]\n            all_logits_idx += x.shape[0]\n            losses[i] = F.cross_entropy(logits, y)\n            \n            nll = F.cross_entropy(logits, y, reduction = \"none\").unsqueeze(0)\n        \n            if all_confs_envs is None:\n                
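# first domain: start the (num_domains, batch) matrix of per-sample risks\n                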
all_confs_envs = nll\n            else:\n                all_confs_envs = torch.cat([all_confs_envs, nll], dim = 0)\n                \n        erm_loss = losses.mean()\n        \n        ## squeeze the risks\n        all_confs_envs = torch.squeeze(all_confs_envs)\n        \n        ## find the worst domain\n        worst_env_idx = torch.argmax(torch.clone(losses))\n        all_confs_worst_env = all_confs_envs[worst_env_idx]\n\n        ## flatten the risk\n        all_confs_worst_env_flat = torch.flatten(all_confs_worst_env)\n        all_confs_all_envs_flat = torch.flatten(all_confs_envs)\n    \n        matching_penalty = self.mmd(all_confs_worst_env_flat.unsqueeze(1), all_confs_all_envs_flat.unsqueeze(1)) \n        \n        ## variance penalty\n        variance_penalty = torch.var(all_confs_all_envs_flat)\n        variance_penalty += torch.var(all_confs_worst_env_flat)\n        \n        total_loss = erm_loss + matching_penalty_weight * matching_penalty + variance_penalty_weight * variance_penalty\n            \n        if self.update_count == self.hparams['rdm_penalty_anneal_iters']:\n            # Reset Adam, because it doesn't like the sharp jump in gradient\n            # magnitudes that happens at this step.\n            self.optimizer = torch.optim.Adam(\n                self.network.parameters(),\n                lr=self.hparams[\"rdm_lr\"],\n                weight_decay=self.hparams['weight_decay'])\n        \n        self.optimizer.zero_grad()\n        total_loss.backward()\n        self.optimizer.step()\n        \n        self.update_count += 1\n\n        return {'update_count': self.update_count.item(), 'total_loss': total_loss.item(), 'erm_loss': erm_loss.item(), 'matching_penalty': matching_penalty.item(), 'variance_penalty': variance_penalty.item(), 'rdm_lambda' : self.hparams['rdm_lambda']}\n\nclass VREx(ERM):\n    \"\"\"V-REx algorithm from http://arxiv.org/abs/2003.00688\"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(VREx, self).__init__(input_shape, num_classes, num_domains,\n                                  hparams)\n        self.register_buffer('update_count', torch.tensor([0]))\n\n    def update(self, minibatches, unlabeled=None):\n        if self.update_count >= self.hparams[\"vrex_penalty_anneal_iters\"]:\n            penalty_weight = self.hparams[\"vrex_lambda\"]\n        else:\n            penalty_weight = 1.0\n\n        nll = 0.\n\n        all_x = torch.cat([x for x, y in minibatches])\n        all_logits = self.network(all_x)\n        all_logits_idx = 0\n        losses = torch.zeros(len(minibatches))\n        for i, (x, y) in enumerate(minibatches):\n            logits = all_logits[all_logits_idx:all_logits_idx + x.shape[0]]\n            all_logits_idx += x.shape[0]\n            nll = F.cross_entropy(logits, y)\n            losses[i] = nll\n\n        mean = losses.mean()\n        penalty = ((losses - mean) ** 2).mean()\n        loss = mean + penalty_weight * penalty\n\n        if self.update_count == self.hparams['vrex_penalty_anneal_iters']:\n            # Reset Adam (like IRM), because it doesn't like the sharp jump in\n            # gradient magnitudes that happens at this step.\n            self.optimizer = torch.optim.Adam(\n                self.network.parameters(),\n                lr=self.hparams[\"lr\"],\n                weight_decay=self.hparams['weight_decay'])\n\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        self.update_count += 1\n        return 
{'loss': loss.item(), 'nll': nll.item(),\n                'penalty': penalty.item()}\n\n\nclass Mixup(ERM):\n    \"\"\"\n    Mixup of minibatches from different domains\n    https://arxiv.org/pdf/2001.00677.pdf\n    https://arxiv.org/pdf/1912.01805.pdf\n    \"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(Mixup, self).__init__(input_shape, num_classes, num_domains,\n                                    hparams)\n\n    def update(self, minibatches, unlabeled=None):\n        objective = 0\n\n        for (xi, yi), (xj, yj) in random_pairs_of_minibatches(minibatches):\n            lam = np.random.beta(self.hparams[\"mixup_alpha\"],\n                                 self.hparams[\"mixup_alpha\"])\n\n            x = lam * xi + (1 - lam) * xj\n            predictions = self.predict(x)\n\n            objective += lam * F.cross_entropy(predictions, yi)\n            objective += (1 - lam) * F.cross_entropy(predictions, yj)\n\n        objective /= len(minibatches)\n\n        self.optimizer.zero_grad()\n        objective.backward()\n        self.optimizer.step()\n\n        return {'loss': objective.item()}\n\n\nclass GroupDRO(ERM):\n    \"\"\"\n    Robust ERM minimizes the error at the worst minibatch\n    Algorithm 1 from [https://arxiv.org/pdf/1911.08731.pdf]\n    \"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(GroupDRO, self).__init__(input_shape, num_classes, num_domains,\n                                        hparams)\n        self.register_buffer(\"q\", torch.Tensor())\n\n    def update(self, minibatches, unlabeled=None):\n        device = \"cuda\" if minibatches[0][0].is_cuda else \"cpu\"\n\n        if not len(self.q):\n            self.q = torch.ones(len(minibatches)).to(device)\n\n        losses = torch.zeros(len(minibatches)).to(device)\n\n        for m in range(len(minibatches)):\n            x, y = minibatches[m]\n            losses[m] = F.cross_entropy(self.predict(x), y)\n            self.q[m] *= (self.hparams[\"groupdro_eta\"] * losses[m].data).exp()\n\n        self.q /= self.q.sum()\n\n        loss = torch.dot(losses, self.q)\n\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        return {'loss': loss.item()}\n\n\nclass MLDG(ERM):\n    \"\"\"\n    Model-Agnostic Meta-Learning\n    Algorithm 1 / Equation (3) from: https://arxiv.org/pdf/1710.03463.pdf\n    Related: https://arxiv.org/pdf/1703.03400.pdf\n    Related: https://arxiv.org/pdf/1910.13580.pdf\n    \"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(MLDG, self).__init__(input_shape, num_classes, num_domains,\n                                   hparams)\n        self.num_meta_test = hparams['n_meta_test']\n\n    def update(self, minibatches, unlabeled=None):\n        \"\"\"\n        Terms being computed:\n            * Li = Loss(xi, yi, params)\n            * Gi = Grad(Li, params)\n\n            * Lj = Loss(xj, yj, Optimizer(params, grad(Li, params)))\n            * Gj = Grad(Lj, params)\n\n            * params = Optimizer(params, Grad(Li + beta * Lj, params))\n            *        = Optimizer(params, Gi + beta * Gj)\n\n        That is, when calling .step(), we want grads to be Gi + beta * Gj\n\n        For computational efficiency, we do not compute second derivatives.\n        \"\"\"\n        num_mb = len(minibatches)\n        objective = 0\n\n        self.optimizer.zero_grad()\n        for p in self.network.parameters():\n            if p.grad 
is None:\n                p.grad = torch.zeros_like(p)\n\n        for (xi, yi), (xj, yj) in split_meta_train_test(minibatches, self.num_meta_test):\n            # fine tune clone-network on task \"i\"\n            inner_net = copy.deepcopy(self.network)\n\n            inner_opt = torch.optim.Adam(\n                inner_net.parameters(),\n                lr=self.hparams[\"lr\"],\n                weight_decay=self.hparams['weight_decay']\n            )\n\n            inner_obj = F.cross_entropy(inner_net(xi), yi)\n\n            inner_opt.zero_grad()\n            inner_obj.backward()\n            inner_opt.step()\n\n            # The network has now accumulated gradients Gi\n            # The clone-network has now parameters P - lr * Gi\n            for p_tgt, p_src in zip(self.network.parameters(),\n                                    inner_net.parameters()):\n                if p_src.grad is not None:\n                    p_tgt.grad.data.add_(p_src.grad.data / num_mb)\n\n            # `objective` is populated for reporting purposes\n            objective += inner_obj.item()\n\n            # this computes Gj on the clone-network\n            loss_inner_j = F.cross_entropy(inner_net(xj), yj)\n            grad_inner_j = autograd.grad(loss_inner_j, inner_net.parameters(),\n                allow_unused=True)\n\n            # `objective` is populated for reporting purposes\n            objective += (self.hparams['mldg_beta'] * loss_inner_j).item()\n\n            for p, g_j in zip(self.network.parameters(), grad_inner_j):\n                if g_j is not None:\n                    p.grad.data.add_(\n                        self.hparams['mldg_beta'] * g_j.data / num_mb)\n\n            # The network has now accumulated gradients Gi + beta * Gj\n            # Repeat for all train-test splits, do .step()\n\n        objective /= len(minibatches)\n\n        self.optimizer.step()\n\n        return {'loss': objective}\n\n    # This commented \"update\" method back-propagates through the gradients of\n    # the inner update, as suggested in the original MAML paper.  
However, this\n    # is twice as expensive as the uncommented \"update\" method, which does not\n    # compute second-order derivatives, implementing the First-Order MAML\n    # method (FOMAML) described in the original MAML paper.\n\n    # def update(self, minibatches, unlabeled=None):\n    #     objective = 0\n    #     beta = self.hparams[\"beta\"]\n    #     inner_iterations = self.hparams[\"inner_iterations\"]\n\n    #     self.optimizer.zero_grad()\n\n    #     with higher.innerloop_ctx(self.network, self.optimizer,\n    #         copy_initial_weights=False) as (inner_network, inner_optimizer):\n\n    #         for (xi, yi), (xj, yj) in random_pairs_of_minibatches(minibatches):\n    #             for inner_iteration in range(inner_iterations):\n    #                 li = F.cross_entropy(inner_network(xi), yi)\n    #                 inner_optimizer.step(li)\n    #\n    #             objective += F.cross_entropy(self.network(xi), yi)\n    #             objective += beta * F.cross_entropy(inner_network(xj), yj)\n\n    #         objective /= len(minibatches)\n    #         objective.backward()\n    #\n    #     self.optimizer.step()\n    #\n    #     return objective\n\n\nclass AbstractMMD(ERM):\n    \"\"\"\n    Perform ERM while matching the pair-wise domain feature distributions\n    using MMD (abstract class)\n    \"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams, gaussian):\n        super(AbstractMMD, self).__init__(input_shape, num_classes, num_domains,\n                                  hparams)\n        if gaussian:\n            self.kernel_type = \"gaussian\"\n        else:\n            self.kernel_type = \"mean_cov\"\n\n    def my_cdist(self, x1, x2):\n        x1_norm = x1.pow(2).sum(dim=-1, keepdim=True)\n        x2_norm = x2.pow(2).sum(dim=-1, keepdim=True)\n        res = torch.addmm(x2_norm.transpose(-2, -1),\n                          x1,\n                          x2.transpose(-2, -1), alpha=-2).add_(x1_norm)\n        return res.clamp_min_(1e-30)\n\n    def gaussian_kernel(self, x, y, gamma=[0.001, 0.01, 0.1, 1, 10, 100,\n                                           1000]):\n        D = self.my_cdist(x, y)\n        K = torch.zeros_like(D)\n\n        for g in gamma:\n            K.add_(torch.exp(D.mul(-g)))\n\n        return K\n\n    def mmd(self, x, y):\n        if self.kernel_type == \"gaussian\":\n            Kxx = self.gaussian_kernel(x, x).mean()\n            Kyy = self.gaussian_kernel(y, y).mean()\n            Kxy = self.gaussian_kernel(x, y).mean()\n            return Kxx + Kyy - 2 * Kxy\n        else:\n            mean_x = x.mean(0, keepdim=True)\n            mean_y = y.mean(0, keepdim=True)\n            cent_x = x - mean_x\n            cent_y = y - mean_y\n            cova_x = (cent_x.t() @ cent_x) / (len(x) - 1)\n            cova_y = (cent_y.t() @ cent_y) / (len(y) - 1)\n\n            mean_diff = (mean_x - mean_y).pow(2).mean()\n            cova_diff = (cova_x - cova_y).pow(2).mean()\n\n            return mean_diff + cova_diff\n\n    def update(self, minibatches, unlabeled=None):\n        objective = 0\n        penalty = 0\n        nmb = len(minibatches)\n\n        features = [self.featurizer(xi) for xi, _ in minibatches]\n        classifs = [self.classifier(fi) for fi in features]\n        targets = [yi for _, yi in minibatches]\n\n        for i in range(nmb):\n            objective += F.cross_entropy(classifs[i], targets[i])\n            for j in range(i + 1, nmb):\n                penalty += self.mmd(features[i], features[j])\n\n        
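# average the per-domain losses; the penalty is averaged over the nmb*(nmb-1)/2 domain pairs\n        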
objective /= nmb\n        if nmb > 1:\n            penalty /= (nmb * (nmb - 1) / 2)\n\n        self.optimizer.zero_grad()\n        (objective + (self.hparams['mmd_gamma']*penalty)).backward()\n        self.optimizer.step()\n\n        if torch.is_tensor(penalty):\n            penalty = penalty.item()\n\n        return {'loss': objective.item(), 'penalty': penalty}\n\n\nclass MMD(AbstractMMD):\n    \"\"\"\n    MMD using Gaussian kernel\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(MMD, self).__init__(input_shape, num_classes,\n                                          num_domains, hparams, gaussian=True)\n\n\nclass CORAL(AbstractMMD):\n    \"\"\"\n    MMD using mean and covariance difference\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(CORAL, self).__init__(input_shape, num_classes,\n                                         num_domains, hparams, gaussian=False)\n\n\nclass MTL(Algorithm):\n    \"\"\"\n    A neural network version of\n    Domain Generalization by Marginal Transfer Learning\n    (https://arxiv.org/abs/1711.07910)\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(MTL, self).__init__(input_shape, num_classes, num_domains,\n                                  hparams)\n        self.featurizer = networks.Featurizer(input_shape, self.hparams)\n        self.classifier = networks.Classifier(\n            self.featurizer.n_outputs * 2,\n            num_classes,\n            self.hparams['nonlinear_classifier'])\n        self.optimizer = torch.optim.Adam(\n            list(self.featurizer.parameters()) +\\\n            list(self.classifier.parameters()),\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams['weight_decay']\n        )\n\n        self.register_buffer('embeddings',\n                             torch.zeros(num_domains,\n                                         self.featurizer.n_outputs))\n\n        self.ema = self.hparams['mtl_ema']\n\n    def update(self, minibatches, unlabeled=None):\n        loss = 0\n        for env, (x, y) in enumerate(minibatches):\n            loss += F.cross_entropy(self.predict(x, env), y)\n\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        return {'loss': loss.item()}\n\n    def update_embeddings_(self, features, env=None):\n        return_embedding = features.mean(0)\n\n        if env is not None:\n            return_embedding = self.ema * return_embedding +\\\n                               (1 - self.ema) * self.embeddings[env]\n\n            self.embeddings[env] = return_embedding.clone().detach()\n\n        return return_embedding.view(1, -1).repeat(len(features), 1)\n\n    def predict(self, x, env=None):\n        features = self.featurizer(x)\n        embedding = self.update_embeddings_(features, env).normal_()\n        return self.classifier(torch.cat((features, embedding), 1))\n\nclass SagNet(Algorithm):\n    \"\"\"\n    Style Agnostic Network\n    Algorithm 1 from: https://arxiv.org/abs/1910.11645\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(SagNet, self).__init__(input_shape, num_classes, num_domains,\n                                  hparams)\n        # featurizer network\n        self.network_f = networks.Featurizer(input_shape, self.hparams)\n        # content network\n        self.network_c = networks.Classifier(\n            
self.network_f.n_outputs,\n            num_classes,\n            self.hparams['nonlinear_classifier'])\n        # style network\n        self.network_s = networks.Classifier(\n            self.network_f.n_outputs,\n            num_classes,\n            self.hparams['nonlinear_classifier'])\n\n        # # This commented block of code implements something closer to the\n        # # original paper, but is specific to ResNet and puts in disadvantage\n        # # the other algorithms.\n        # resnet_c = networks.Featurizer(input_shape, self.hparams)\n        # resnet_s = networks.Featurizer(input_shape, self.hparams)\n        # # featurizer network\n        # self.network_f = torch.nn.Sequential(\n        #         resnet_c.network.conv1,\n        #         resnet_c.network.bn1,\n        #         resnet_c.network.relu,\n        #         resnet_c.network.maxpool,\n        #         resnet_c.network.layer1,\n        #         resnet_c.network.layer2,\n        #         resnet_c.network.layer3)\n        # # content network\n        # self.network_c = torch.nn.Sequential(\n        #         resnet_c.network.layer4,\n        #         resnet_c.network.avgpool,\n        #         networks.Flatten(),\n        #         resnet_c.network.fc)\n        # # style network\n        # self.network_s = torch.nn.Sequential(\n        #         resnet_s.network.layer4,\n        #         resnet_s.network.avgpool,\n        #         networks.Flatten(),\n        #         resnet_s.network.fc)\n\n        def opt(p):\n            return torch.optim.Adam(p, lr=hparams[\"lr\"],\n                    weight_decay=hparams[\"weight_decay\"])\n\n        self.optimizer_f = opt(self.network_f.parameters())\n        self.optimizer_c = opt(self.network_c.parameters())\n        self.optimizer_s = opt(self.network_s.parameters())\n        self.weight_adv = hparams[\"sag_w_adv\"]\n\n    def forward_c(self, x):\n        # learning content network on randomized style\n        return self.network_c(self.randomize(self.network_f(x), \"style\"))\n\n    def forward_s(self, x):\n        # learning style network on randomized content\n        return self.network_s(self.randomize(self.network_f(x), \"content\"))\n\n    def randomize(self, x, what=\"style\", eps=1e-5):\n        device = \"cuda\" if x.is_cuda else \"cpu\"\n        sizes = x.size()\n        alpha = torch.rand(sizes[0], 1).to(device)\n\n        if len(sizes) == 4:\n            x = x.view(sizes[0], sizes[1], -1)\n            alpha = alpha.unsqueeze(-1)\n\n        mean = x.mean(-1, keepdim=True)\n        var = x.var(-1, keepdim=True)\n\n        x = (x - mean) / (var + eps).sqrt()\n\n        idx_swap = torch.randperm(sizes[0])\n        if what == \"style\":\n            mean = alpha * mean + (1 - alpha) * mean[idx_swap]\n            var = alpha * var + (1 - alpha) * var[idx_swap]\n        else:\n            x = x[idx_swap].detach()\n\n        x = x * (var + eps).sqrt() + mean\n        return x.view(*sizes)\n\n    def update(self, minibatches, unlabeled=None):\n        all_x = torch.cat([x for x, y in minibatches])\n        all_y = torch.cat([y for x, y in minibatches])\n\n        # learn content\n        self.optimizer_f.zero_grad()\n        self.optimizer_c.zero_grad()\n        loss_c = F.cross_entropy(self.forward_c(all_x), all_y)\n        loss_c.backward()\n        self.optimizer_f.step()\n        self.optimizer_c.step()\n\n        # learn style\n        self.optimizer_s.zero_grad()\n        loss_s = F.cross_entropy(self.forward_s(all_x), all_y)\n        
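# only the style head is stepped here; the featurizer is updated adversarially below\n        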
loss_s.backward()\n        self.optimizer_s.step()\n\n        # learn adversary\n        self.optimizer_f.zero_grad()\n        loss_adv = -F.log_softmax(self.forward_s(all_x), dim=1).mean(1).mean()\n        loss_adv = loss_adv * self.weight_adv\n        loss_adv.backward()\n        self.optimizer_f.step()\n\n        return {'loss_c': loss_c.item(), 'loss_s': loss_s.item(),\n                'loss_adv': loss_adv.item()}\n\n    def predict(self, x):\n        return self.network_c(self.network_f(x))\n\n\nclass RSC(ERM):\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(RSC, self).__init__(input_shape, num_classes, num_domains,\n                                   hparams)\n        self.drop_f = (1 - hparams['rsc_f_drop_factor']) * 100\n        self.drop_b = (1 - hparams['rsc_b_drop_factor']) * 100\n        self.num_classes = num_classes\n\n    def update(self, minibatches, unlabeled=None):\n        device = \"cuda\" if minibatches[0][0].is_cuda else \"cpu\"\n\n        # inputs\n        all_x = torch.cat([x for x, y in minibatches])\n        # labels\n        all_y = torch.cat([y for _, y in minibatches])\n        # one-hot labels\n        all_o = torch.nn.functional.one_hot(all_y, self.num_classes)\n        # features\n        all_f = self.featurizer(all_x)\n        # predictions\n        all_p = self.classifier(all_f)\n\n        # Equation (1): compute gradients with respect to representation\n        all_g = autograd.grad((all_p * all_o).sum(), all_f)[0]\n\n        # Equation (2): compute top-gradient-percentile mask\n        percentiles = np.percentile(all_g.cpu(), self.drop_f, axis=1)\n        percentiles = torch.Tensor(percentiles)\n        percentiles = percentiles.unsqueeze(1).repeat(1, all_g.size(1))\n        mask_f = all_g.lt(percentiles.to(device)).float()\n\n        # Equation (3): mute top-gradient-percentile activations\n        all_f_muted = all_f * mask_f\n\n        # Equation (4): compute muted predictions\n        all_p_muted = self.classifier(all_f_muted)\n\n        # Section 3.3: Batch Percentage\n        all_s = F.softmax(all_p, dim=1)\n        all_s_muted = F.softmax(all_p_muted, dim=1)\n        changes = (all_s * all_o).sum(1) - (all_s_muted * all_o).sum(1)\n        percentile = np.percentile(changes.detach().cpu(), self.drop_b)\n        mask_b = changes.lt(percentile).float().view(-1, 1)\n        mask = torch.logical_or(mask_f, mask_b).float()\n\n        # Equations (3) and (4) again, this time muting over examples\n        all_p_muted_again = self.classifier(all_f * mask)\n\n        # Equation (5): update\n        loss = F.cross_entropy(all_p_muted_again, all_y)\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        return {'loss': loss.item()}\n\n\nclass SD(ERM):\n    \"\"\"\n    Gradient Starvation: A Learning Proclivity in Neural Networks\n    Equation 25 from [https://arxiv.org/pdf/2011.09468.pdf]\n    \"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(SD, self).__init__(input_shape, num_classes, num_domains,\n                                        hparams)\n        self.sd_reg = hparams[\"sd_reg\"]\n\n    def update(self, minibatches, unlabeled=None):\n        all_x = torch.cat([x for x, y in minibatches])\n        all_y = torch.cat([y for x, y in minibatches])\n        all_p = self.predict(all_x)\n\n        loss = F.cross_entropy(all_p, all_y)\n        penalty = (all_p ** 2).mean()\n        objective = loss + self.sd_reg * penalty\n\n  
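      # single optimizer step on the CE loss plus the squared-logits penalty\n  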
      self.optimizer.zero_grad()\n        objective.backward()\n        self.optimizer.step()\n\n        return {'loss': loss.item(), 'penalty': penalty.item()}\n\nclass ANDMask(ERM):\n    \"\"\"\n    Learning Explanations that are Hard to Vary [https://arxiv.org/abs/2009.00329]\n    AND-Mask implementation from [https://github.com/gibipara92/learning-explanations-hard-to-vary]\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(ANDMask, self).__init__(input_shape, num_classes, num_domains, hparams)\n\n        self.tau = hparams[\"tau\"]\n\n    def update(self, minibatches, unlabeled=None):\n        mean_loss = 0\n        param_gradients = [[] for _ in self.network.parameters()]\n        for i, (x, y) in enumerate(minibatches):\n            logits = self.network(x)\n\n            env_loss = F.cross_entropy(logits, y)\n            mean_loss += env_loss.item() / len(minibatches)\n\n            env_grads = autograd.grad(env_loss, self.network.parameters())\n            for grads, env_grad in zip(param_gradients, env_grads):\n                grads.append(env_grad)\n\n        self.optimizer.zero_grad()\n        self.mask_grads(self.tau, param_gradients, self.network.parameters())\n        self.optimizer.step()\n\n        return {'loss': mean_loss}\n\n    def mask_grads(self, tau, gradients, params):\n\n        for param, grads in zip(params, gradients):\n            grads = torch.stack(grads, dim=0)\n            grad_signs = torch.sign(grads)\n            mask = torch.mean(grad_signs, dim=0).abs() >= self.tau\n            mask = mask.to(torch.float32)\n            avg_grad = torch.mean(grads, dim=0)\n\n            mask_t = (mask.sum() / mask.numel())\n            param.grad = mask * avg_grad\n            param.grad *= (1. 
/ (1e-10 + mask_t))\n\n        return 0\n\nclass IGA(ERM):\n    \"\"\"\n    Inter-environmental Gradient Alignment\n    From https://arxiv.org/abs/2008.01883v2\n    \"\"\"\n\n    def __init__(self, in_features, num_classes, num_domains, hparams):\n        super(IGA, self).__init__(in_features, num_classes, num_domains, hparams)\n\n    def update(self, minibatches, unlabeled=None):\n        total_loss = 0\n        grads = []\n        for i, (x, y) in enumerate(minibatches):\n            logits = self.network(x)\n\n            env_loss = F.cross_entropy(logits, y)\n            total_loss += env_loss\n\n            env_grad = autograd.grad(env_loss, self.network.parameters(),\n                                        create_graph=True)\n\n            grads.append(env_grad)\n\n        mean_loss = total_loss / len(minibatches)\n        mean_grad = autograd.grad(mean_loss, self.network.parameters(),\n                                        retain_graph=True)\n\n        # compute trace penalty\n        penalty_value = 0\n        for grad in grads:\n            for g, mean_g in zip(grad, mean_grad):\n                penalty_value += (g - mean_g).pow(2).sum()\n\n        objective = mean_loss + self.hparams['penalty'] * penalty_value\n\n        self.optimizer.zero_grad()\n        objective.backward()\n        self.optimizer.step()\n\n        return {'loss': mean_loss.item(), 'penalty': penalty_value.item()}\n\n\nclass SelfReg(ERM):\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(SelfReg, self).__init__(input_shape, num_classes, num_domains,\n                                   hparams)\n        self.num_classes = num_classes\n        self.MSEloss = nn.MSELoss()\n        input_feat_size = self.featurizer.n_outputs\n        hidden_size = input_feat_size if input_feat_size==2048 else input_feat_size*2\n\n        self.cdpl = nn.Sequential(\n                            nn.Linear(input_feat_size, hidden_size),\n                            nn.BatchNorm1d(hidden_size),\n                            nn.ReLU(inplace=True),\n                            nn.Linear(hidden_size, hidden_size),\n                            nn.BatchNorm1d(hidden_size),\n                            nn.ReLU(inplace=True),\n                            nn.Linear(hidden_size, input_feat_size),\n                            nn.BatchNorm1d(input_feat_size)\n        )\n\n    def update(self, minibatches, unlabeled=None):\n\n        all_x = torch.cat([x for x, y in minibatches])\n        all_y = torch.cat([y for _, y in minibatches])\n\n        lam = np.random.beta(0.5, 0.5)\n\n        batch_size = all_y.size()[0]\n\n        # cluster and order features into same-class group\n        with torch.no_grad():\n            sorted_y, indices = torch.sort(all_y)\n            sorted_x = torch.zeros_like(all_x)\n            for idx, order in enumerate(indices):\n                sorted_x[idx] = all_x[order]\n            intervals = []\n            ex = 0\n            for idx, val in enumerate(sorted_y):\n                if ex==val:\n                    continue\n                intervals.append(idx)\n                ex = val\n            intervals.append(batch_size)\n\n            all_x = sorted_x\n            all_y = sorted_y\n\n        feat = self.featurizer(all_x)\n        proj = self.cdpl(feat)\n\n        output = self.classifier(feat)\n\n        # shuffle\n        output_2 = torch.zeros_like(output)\n        feat_2 = torch.zeros_like(proj)\n        output_3 = torch.zeros_like(output)\n        feat_3 = 
torch.zeros_like(proj)\n        ex = 0\n        for end in intervals:\n            shuffle_indices = torch.randperm(end-ex)+ex\n            shuffle_indices2 = torch.randperm(end-ex)+ex\n            for idx in range(end-ex):\n                output_2[idx+ex] = output[shuffle_indices[idx]]\n                feat_2[idx+ex] = proj[shuffle_indices[idx]]\n                output_3[idx+ex] = output[shuffle_indices2[idx]]\n                feat_3[idx+ex] = proj[shuffle_indices2[idx]]\n            ex = end\n\n        # mixup\n        output_3 = lam*output_2 + (1-lam)*output_3\n        feat_3 = lam*feat_2 + (1-lam)*feat_3\n\n        # regularization\n        L_ind_logit = self.MSEloss(output, output_2)\n        L_hdl_logit = self.MSEloss(output, output_3)\n        L_ind_feat = 0.3 * self.MSEloss(feat, feat_2)\n        L_hdl_feat = 0.3 * self.MSEloss(feat, feat_3)\n\n        cl_loss = F.cross_entropy(output, all_y)\n        C_scale = min(cl_loss.item(), 1.)\n        loss = cl_loss + C_scale*(lam*(L_ind_logit + L_ind_feat)+(1-lam)*(L_hdl_logit + L_hdl_feat))\n\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        return {'loss': loss.item()}\n\n\nclass SANDMask(ERM):\n    \"\"\"\n    SAND-mask: An Enhanced Gradient Masking Strategy for the Discovery of Invariances in Domain Generalization\n    <https://arxiv.org/abs/2106.02266>\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(SANDMask, self).__init__(input_shape, num_classes, num_domains, hparams)\n\n        self.tau = hparams[\"tau\"]\n        self.k = hparams[\"k\"]\n        betas = (0.9, 0.999)\n        self.optimizer = torch.optim.Adam(\n            self.network.parameters(),\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams['weight_decay'],\n            betas=betas\n        )\n\n        self.register_buffer('update_count', torch.tensor([0]))\n\n    def update(self, minibatches, unlabeled=None):\n\n        mean_loss = 0\n        param_gradients = [[] for _ in self.network.parameters()]\n        for i, (x, y) in enumerate(minibatches):\n            logits = self.network(x)\n\n            env_loss = F.cross_entropy(logits, y)\n            mean_loss += env_loss.item() / len(minibatches)\n            env_grads = autograd.grad(env_loss, self.network.parameters(), retain_graph=True)\n            for grads, env_grad in zip(param_gradients, env_grads):\n                grads.append(env_grad)\n\n        self.optimizer.zero_grad()\n        # gradient masking applied here\n        self.mask_grads(param_gradients, self.network.parameters())\n        self.optimizer.step()\n        self.update_count += 1\n\n        return {'loss': mean_loss}\n\n    def mask_grads(self, gradients, params):\n        '''\n        Here a mask with continuous values in the range [0,1] is formed to control the amount of update for each\n        parameter based on the agreement of gradients coming from different environments.\n        '''\n        device = gradients[0][0].device\n        for param, grads in zip(params, gradients):\n            grads = torch.stack(grads, dim=0)\n            avg_grad = torch.mean(grads, dim=0)\n            grad_signs = torch.sign(grads)\n            gamma = torch.tensor(1.0).to(device)\n            grads_var = grads.var(dim=0)\n            grads_var[torch.isnan(grads_var)] = 1e-17\n            lam = (gamma * grads_var).pow(-1)\n            mask = torch.tanh(self.k * lam * (torch.abs(grad_signs.mean(dim=0)) - self.tau))\n          
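  # clamp the soft mask at zero: components whose sign agreement is below tau get no update\n          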
  mask = torch.max(mask, torch.zeros_like(mask))\n            mask[torch.isnan(mask)] = 1e-17\n            mask_t = (mask.sum() / mask.numel())\n            param.grad = mask * avg_grad\n            param.grad *= (1. / (1e-10 + mask_t))\n\n\n\nclass Fishr(Algorithm):\n    \"Invariant Gradients variances for Out-of-distribution Generalization\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        assert backpack is not None, \"Install backpack with: 'pip install backpack-for-pytorch==1.3.0'\"\n        super(Fishr, self).__init__(input_shape, num_classes, num_domains, hparams)\n        self.num_domains = num_domains\n\n        self.featurizer = networks.Featurizer(input_shape, self.hparams)\n        self.classifier = extend(\n            networks.Classifier(\n                self.featurizer.n_outputs,\n                num_classes,\n                self.hparams['nonlinear_classifier'],\n            )\n        )\n        self.network = nn.Sequential(self.featurizer, self.classifier)\n\n        self.register_buffer(\"update_count\", torch.tensor([0]))\n        self.bce_extended = extend(nn.CrossEntropyLoss(reduction='none'))\n        self.ema_per_domain = [\n            MovingAverage(ema=self.hparams[\"ema\"], oneminusema_correction=True)\n            for _ in range(self.num_domains)\n        ]\n        self._init_optimizer()\n\n    def _init_optimizer(self):\n        self.optimizer = torch.optim.Adam(\n            list(self.featurizer.parameters()) + list(self.classifier.parameters()),\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams[\"weight_decay\"],\n        )\n\n    def update(self, minibatches, unlabeled=None):\n        assert len(minibatches) == self.num_domains\n        all_x = torch.cat([x for x, y in minibatches])\n        all_y = torch.cat([y for x, y in minibatches])\n        len_minibatches = [x.shape[0] for x, y in minibatches]\n\n        all_z = self.featurizer(all_x)\n        all_logits = self.classifier(all_z)\n\n        penalty = self.compute_fishr_penalty(all_logits, all_y, len_minibatches)\n        all_nll = F.cross_entropy(all_logits, all_y)\n\n        penalty_weight = 0\n        if self.update_count >= self.hparams[\"penalty_anneal_iters\"]:\n            penalty_weight = self.hparams[\"lambda\"]\n            if self.update_count == self.hparams[\"penalty_anneal_iters\"] != 0:\n                # Reset Adam as in IRM or V-REx, because it may not like the sharp jump in\n                # gradient magnitudes that happens at this step.\n                self._init_optimizer()\n        self.update_count += 1\n\n        objective = all_nll + penalty_weight * penalty\n        self.optimizer.zero_grad()\n        objective.backward()\n        self.optimizer.step()\n\n        return {'loss': objective.item(), 'nll': all_nll.item(), 'penalty': penalty.item()}\n\n    def compute_fishr_penalty(self, all_logits, all_y, len_minibatches):\n        dict_grads = self._get_grads(all_logits, all_y)\n        grads_var_per_domain = self._get_grads_var_per_domain(dict_grads, len_minibatches)\n        return self._compute_distance_grads_var(grads_var_per_domain)\n\n    def _get_grads(self, logits, y):\n        self.optimizer.zero_grad()\n        loss = self.bce_extended(logits, y).sum()\n        with backpack(BatchGrad()):\n            loss.backward(\n                inputs=list(self.classifier.parameters()), retain_graph=True, create_graph=True\n            )\n\n        # compute individual grads for all samples across all domains 
simultaneously\n        dict_grads = OrderedDict(\n            [\n                (name, weights.grad_batch.clone().view(weights.grad_batch.size(0), -1))\n                for name, weights in self.classifier.named_parameters()\n            ]\n        )\n        return dict_grads\n\n    def _get_grads_var_per_domain(self, dict_grads, len_minibatches):\n        # grads var per domain\n        grads_var_per_domain = [{} for _ in range(self.num_domains)]\n        for name, _grads in dict_grads.items():\n            all_idx = 0\n            for domain_id, bsize in enumerate(len_minibatches):\n                env_grads = _grads[all_idx:all_idx + bsize]\n                all_idx += bsize\n                env_mean = env_grads.mean(dim=0, keepdim=True)\n                env_grads_centered = env_grads - env_mean\n                grads_var_per_domain[domain_id][name] = (env_grads_centered).pow(2).mean(dim=0)\n\n        # moving average\n        for domain_id in range(self.num_domains):\n            grads_var_per_domain[domain_id] = self.ema_per_domain[domain_id].update(\n                grads_var_per_domain[domain_id]\n            )\n\n        return grads_var_per_domain\n\n    def _compute_distance_grads_var(self, grads_var_per_domain):\n\n        # compute gradient variances averaged across domains\n        grads_var = OrderedDict(\n            [\n                (\n                    name,\n                    torch.stack(\n                        [\n                            grads_var_per_domain[domain_id][name]\n                            for domain_id in range(self.num_domains)\n                        ],\n                        dim=0\n                    ).mean(dim=0)\n                )\n                for name in grads_var_per_domain[0].keys()\n            ]\n        )\n\n        penalty = 0\n        for domain_id in range(self.num_domains):\n            penalty += l2_between_dicts(grads_var_per_domain[domain_id], grads_var)\n        return penalty / self.num_domains\n\n    def predict(self, x):\n        return self.network(x)\n\nclass TRM(Algorithm):\n    \"\"\"\n    Learning Representations that Support Robust Transfer of Predictors\n    <https://arxiv.org/abs/2110.09940>\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(TRM, self).__init__(input_shape, num_classes, num_domains,hparams)\n        self.register_buffer('update_count', torch.tensor([0]))\n        self.num_domains = num_domains\n        self.featurizer = networks.Featurizer(input_shape, self.hparams)\n        self.classifier = nn.Linear(self.featurizer.n_outputs, num_classes).cuda()\n        self.clist = [nn.Linear(self.featurizer.n_outputs, num_classes).cuda() for i in range(num_domains+1)]\n        self.olist = [torch.optim.SGD(\n            self.clist[i].parameters(),\n            lr=1e-1,\n        ) for i in range(num_domains+1)]\n\n        self.optimizer_f = torch.optim.Adam(\n            self.featurizer.parameters(),\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams['weight_decay']\n        )\n        self.optimizer_c = torch.optim.Adam(\n            self.classifier.parameters(),\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams['weight_decay']\n        )\n        # initial weights\n        self.alpha = torch.ones((num_domains, num_domains)).cuda() - torch.eye(num_domains).cuda()\n\n    @staticmethod\n    def neum(v, model, batch):\n        def hvp(y, w, v):\n\n            # First backprop\n            first_grads 
= autograd.grad(y, w, retain_graph=True, create_graph=True, allow_unused=True)\n            first_grads = torch.nn.utils.parameters_to_vector(first_grads)\n            # Elementwise products\n            elemwise_products = first_grads @ v\n            # Second backprop\n            return_grads = autograd.grad(elemwise_products, w, create_graph=True)\n            return_grads = torch.nn.utils.parameters_to_vector(return_grads)\n            return return_grads\n\n        v = v.detach()\n        h_estimate = v\n        cnt = 0.\n        model.eval()\n        iter = 10\n        for i in range(iter):\n            model.weight.grad *= 0\n            y = model(batch[0].detach())\n            loss = F.cross_entropy(y, batch[1].detach())\n            hv = hvp(loss, model.weight, v)\n            v -= hv\n            v = v.detach()\n            h_estimate = v + h_estimate\n            h_estimate = h_estimate.detach()\n            # not converge\n            if torch.max(abs(h_estimate)) > 10:\n                break\n            cnt += 1\n\n        model.train()\n        return h_estimate.detach()\n\n    def update(self, minibatches, unlabeled=None):\n\n        loss_swap = 0.0\n        trm = 0.0\n\n        if self.update_count >= self.hparams['iters']:\n            # TRM\n            if self.hparams['class_balanced']:\n                # for stability when facing unbalanced labels across environments\n                for classifier in self.clist:\n                    classifier.weight.data = copy.deepcopy(self.classifier.weight.data)\n            self.alpha /= self.alpha.sum(1, keepdim=True)\n\n            self.featurizer.train()\n            all_x = torch.cat([x for x, y in minibatches])\n            all_y = torch.cat([y for x, y in minibatches])\n            all_feature = self.featurizer(all_x)\n            # updating original network\n            loss = F.cross_entropy(self.classifier(all_feature), all_y)\n\n            for i in range(30):\n                all_logits_idx = 0\n                loss_erm = 0.\n                for j, (x, y) in enumerate(minibatches):\n                    # j-th domain\n                    feature = all_feature[all_logits_idx:all_logits_idx + x.shape[0]]\n                    all_logits_idx += x.shape[0]\n                    loss_erm += F.cross_entropy(self.clist[j](feature.detach()), y)\n                for opt in self.olist:\n                    opt.zero_grad()\n                loss_erm.backward()\n                for opt in self.olist:\n                    opt.step()\n\n            # collect (feature, y)\n            feature_split = list()\n            y_split = list()\n            all_logits_idx = 0\n            for i, (x, y) in enumerate(minibatches):\n                feature = all_feature[all_logits_idx:all_logits_idx + x.shape[0]]\n                all_logits_idx += x.shape[0]\n                feature_split.append(feature)\n                y_split.append(y)\n\n            # estimate transfer risk\n            for Q, (x, y) in enumerate(minibatches):\n                sample_list = list(range(len(minibatches)))\n                sample_list.remove(Q)\n\n                loss_Q = F.cross_entropy(self.clist[Q](feature_split[Q]), y_split[Q])\n                grad_Q = autograd.grad(loss_Q, self.clist[Q].weight, create_graph=True)\n                vec_grad_Q = nn.utils.parameters_to_vector(grad_Q)\n\n                loss_P = [F.cross_entropy(self.clist[Q](feature_split[i]), y_split[i])*(self.alpha[Q, i].data.detach())\n                          if i in sample_list else 0. 
for i in range(len(minibatches))]\n                loss_P_sum = sum(loss_P)\n                grad_P = autograd.grad(loss_P_sum, self.clist[Q].weight, create_graph=True)\n                vec_grad_P = nn.utils.parameters_to_vector(grad_P).detach()\n                vec_grad_P = self.neum(vec_grad_P, self.clist[Q], (feature_split[Q], y_split[Q]))\n\n                loss_swap += loss_P_sum - self.hparams['cos_lambda'] * (vec_grad_P.detach() @ vec_grad_Q)\n\n                for i in sample_list:\n                    self.alpha[Q, i] *= (self.hparams[\"groupdro_eta\"] * loss_P[i].data).exp()\n\n            loss_swap /= len(minibatches)\n            trm /= len(minibatches)\n        else:\n            # ERM\n            self.featurizer.train()\n            all_x = torch.cat([x for x, y in minibatches])\n            all_y = torch.cat([y for x, y in minibatches])\n            all_feature = self.featurizer(all_x)\n            loss = F.cross_entropy(self.classifier(all_feature), all_y)\n\n        nll = loss.item()\n        self.optimizer_c.zero_grad()\n        self.optimizer_f.zero_grad()\n        if self.update_count >= self.hparams['iters']:\n            loss_swap = (loss + loss_swap)\n        else:\n            loss_swap = loss\n\n        loss_swap.backward()\n        self.optimizer_f.step()\n        self.optimizer_c.step()\n\n        loss_swap = loss_swap.item() - nll\n        self.update_count += 1\n\n        return {'nll': nll, 'trm_loss': loss_swap}\n\n    def predict(self, x):\n        return self.classifier(self.featurizer(x))\n\n    def train(self):\n        self.featurizer.train()\n\n    def eval(self):\n        self.featurizer.eval()\n\nclass IB_ERM(ERM):\n    \"\"\"Information Bottleneck based ERM on feature with conditioning\"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(IB_ERM, self).__init__(input_shape, num_classes, num_domains,\n                                  hparams)\n        self.optimizer = torch.optim.Adam(\n            list(self.featurizer.parameters()) + list(self.classifier.parameters()),\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams['weight_decay']\n        )\n        self.register_buffer('update_count', torch.tensor([0]))\n\n    def update(self, minibatches, unlabeled=None):\n        device = \"cuda\" if minibatches[0][0].is_cuda else \"cpu\"\n        ib_penalty_weight = (self.hparams['ib_lambda'] if self.update_count\n                          >= self.hparams['ib_penalty_anneal_iters'] else\n                          0.0)\n\n        nll = 0.\n        ib_penalty = 0.\n\n        all_x = torch.cat([x for x, y in minibatches])\n        all_features = self.featurizer(all_x)\n        all_logits = self.classifier(all_features)\n        all_logits_idx = 0\n        for i, (x, y) in enumerate(minibatches):\n            features = all_features[all_logits_idx:all_logits_idx + x.shape[0]]\n            logits = all_logits[all_logits_idx:all_logits_idx + x.shape[0]]\n            all_logits_idx += x.shape[0]\n            nll += F.cross_entropy(logits, y)\n            ib_penalty += features.var(dim=0).mean()\n\n        nll /= len(minibatches)\n        ib_penalty /= len(minibatches)\n\n        # Compile loss\n        loss = nll\n        loss += ib_penalty_weight * ib_penalty\n\n        if self.update_count == self.hparams['ib_penalty_anneal_iters']:\n            # Reset Adam, because it doesn't like the sharp jump in gradient\n            # magnitudes that happens at this step.\n            self.optimizer = torch.optim.Adam(\n                list(self.featurizer.parameters()) + list(self.classifier.parameters()),\n                lr=self.hparams[\"lr\"],\n                weight_decay=self.hparams['weight_decay'])\n\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        self.update_count += 1\n        return {'loss': loss.item(),\n                'nll': nll.item(),\n                'IB_penalty': ib_penalty.item()}\n\nclass IB_IRM(ERM):\n    \"\"\"Information Bottleneck based IRM on feature with conditioning\"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(IB_IRM, self).__init__(input_shape, num_classes, num_domains,\n                                  hparams)\n        self.optimizer = torch.optim.Adam(\n            list(self.featurizer.parameters()) + list(self.classifier.parameters()),\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams['weight_decay']\n        )\n        self.register_buffer('update_count', torch.tensor([0]))\n\n    @staticmethod\n    def _irm_penalty(logits, y):\n        device = \"cuda\" if logits[0][0].is_cuda else \"cpu\"\n        scale = torch.tensor(1.).to(device).requires_grad_()\n        loss_1 = F.cross_entropy(logits[::2] * scale, y[::2])\n        loss_2 = F.cross_entropy(logits[1::2] * scale, y[1::2])\n        grad_1 = autograd.grad(loss_1, [scale], create_graph=True)[0]\n        grad_2 = autograd.grad(loss_2, [scale], create_graph=True)[0]\n        result = torch.sum(grad_1 * grad_2)\n        return result\n\n    def update(self, minibatches, unlabeled=None):\n        device = \"cuda\" if minibatches[0][0].is_cuda else \"cpu\"\n        irm_penalty_weight = (self.hparams['irm_lambda'] if self.update_count\n                          >= self.hparams['irm_penalty_anneal_iters'] else\n                          1.0)\n        ib_penalty_weight = (self.hparams['ib_lambda'] if self.update_count\n                          >= self.hparams['ib_penalty_anneal_iters'] else\n                          0.0)\n\n        nll = 0.\n        irm_penalty = 0.\n        ib_penalty = 0.\n\n        all_x = torch.cat([x for x, y in minibatches])\n        all_features = self.featurizer(all_x)\n        all_logits = self.classifier(all_features)\n        all_logits_idx = 0\n        for i, (x, y) in enumerate(minibatches):\n            features = all_features[all_logits_idx:all_logits_idx + x.shape[0]]\n            logits = all_logits[all_logits_idx:all_logits_idx + x.shape[0]]\n            all_logits_idx += x.shape[0]\n            nll += F.cross_entropy(logits, y)\n            irm_penalty += self._irm_penalty(logits, y)\n            ib_penalty += features.var(dim=0).mean()\n\n        nll /= len(minibatches)\n        irm_penalty /= len(minibatches)\n        ib_penalty /= len(minibatches)\n\n        # Compile loss\n        loss = nll\n        loss += irm_penalty_weight * irm_penalty\n        loss += ib_penalty_weight * ib_penalty\n\n        if self.update_count == self.hparams['irm_penalty_anneal_iters'] or self.update_count == self.hparams['ib_penalty_anneal_iters']:\n            # Reset Adam, because it doesn't like the sharp jump in gradient\n            # magnitudes that happens at this step.\n            self.optimizer = torch.optim.Adam(\n                list(self.featurizer.parameters()) + list(self.classifier.parameters()),\n                lr=self.hparams[\"lr\"],\n                weight_decay=self.hparams['weight_decay'])\n\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        self.update_count += 1\n        return {'loss': loss.item(),\n                'nll': nll.item(),\n                'IRM_penalty': irm_penalty.item(),\n                'IB_penalty': ib_penalty.item()}\n\n\nclass AbstractCAD(Algorithm):\n    \"\"\"Contrastive adversarial domain bottleneck (abstract class)\n    from Optimal Representations for Covariate Shift <https://arxiv.org/abs/2201.00057>\n    \"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains,\n                 hparams, is_conditional):\n        super(AbstractCAD, self).__init__(input_shape, num_classes, num_domains, hparams)\n\n        self.featurizer = networks.Featurizer(input_shape, self.hparams)\n        self.classifier = networks.Classifier(\n            self.featurizer.n_outputs,\n            num_classes,\n            self.hparams['nonlinear_classifier'])\n        params = list(self.featurizer.parameters()) + list(self.classifier.parameters())\n\n        # parameters for domain bottleneck loss\n        self.is_conditional = is_conditional  # whether to use a bottleneck conditioned on the label\n        self.base_temperature = 0.07\n        self.temperature = hparams['temperature']\n        self.is_project = hparams['is_project']  # whether to apply a projection head\n        self.is_normalized = hparams['is_normalized']  # whether to normalize the representation when computing the loss\n\n        # whether to flip from maximizing log(p) (False) to minimizing -log(1-p) (True) for the bottleneck loss\n        # the two versions have the same optima, but we find the latter is more stable\n        self.is_flipped = hparams[\"is_flipped\"]\n\n        if self.is_project:\n            feature_dim = self.featurizer.n_outputs  # was undefined before; project from the featurizer output\n            self.project = nn.Sequential(\n                nn.Linear(feature_dim, feature_dim),\n                nn.ReLU(inplace=True),\n                nn.Linear(feature_dim, 128),\n            )\n            params += list(self.project.parameters())\n\n        # Optimizers\n        self.optimizer = torch.optim.Adam(\n            params,\n            lr=self.hparams[\"lr\"],\n            weight_decay=self.hparams['weight_decay']\n        )\n\n    def bn_loss(self, z, y, dom_labels):\n        \"\"\"Contrastive based domain bottleneck loss\n         The implementation is based on the supervised contrastive loss (SupCon) introduced by\n         P. 
Khosla, et al., in “Supervised Contrastive Learning“.\n        Modified from  https://github.com/HobbitLong/SupContrast/blob/8d0963a7dbb1cd28accb067f5144d61f18a77588/losses.py#L11\n        \"\"\"\n        device = z.device\n        batch_size = z.shape[0]\n\n        y = y.contiguous().view(-1, 1)\n        dom_labels = dom_labels.contiguous().view(-1, 1)\n        mask_y = torch.eq(y, y.T).to(device)\n        mask_d = (torch.eq(dom_labels, dom_labels.T)).to(device)\n        mask_drop = ~torch.eye(batch_size).bool().to(device)  # drop the \"current\"/\"self\" example\n        mask_y &= mask_drop\n        mask_y_n_d = mask_y & (~mask_d)  # contain the same label but from different domains\n        mask_y_d = mask_y & mask_d  # contain the same label and the same domain\n        mask_y, mask_drop, mask_y_n_d, mask_y_d = mask_y.float(), mask_drop.float(), mask_y_n_d.float(), mask_y_d.float()\n\n        # compute logits\n        if self.is_project:\n            z = self.project(z)\n        if self.is_normalized:\n            z = F.normalize(z, dim=1)\n        outer = z @ z.T\n        logits = outer / self.temperature\n        logits = logits * mask_drop\n        # for numerical stability\n        logits_max, _ = torch.max(logits, dim=1, keepdim=True)\n        logits = logits - logits_max.detach()\n\n        if not self.is_conditional:\n            # unconditional CAD loss\n            denominator = torch.logsumexp(logits + mask_drop.log(), dim=1, keepdim=True)\n            log_prob = logits - denominator\n\n            mask_valid = (mask_y.sum(1) > 0)\n            log_prob = log_prob[mask_valid]\n            mask_d = mask_d[mask_valid]\n\n            if self.is_flipped:  # maximize log prob of samples from different domains\n                bn_loss = - (self.temperature / self.base_temperature) * torch.logsumexp(\n                    log_prob + (~mask_d).float().log(), dim=1)\n            else:  # minimize log prob of samples from same domain\n                bn_loss = (self.temperature / self.base_temperature) * torch.logsumexp(\n                    log_prob + (mask_d).float().log(), dim=1)\n        else:\n            # conditional CAD loss\n            if self.is_flipped:\n                mask_valid = (mask_y_n_d.sum(1) > 0)\n            else:\n                mask_valid = (mask_y_d.sum(1) > 0)\n\n            mask_y = mask_y[mask_valid]\n            mask_y_d = mask_y_d[mask_valid]\n            mask_y_n_d = mask_y_n_d[mask_valid]\n            logits = logits[mask_valid]\n\n            # compute log_prob_y with the same label\n            denominator = torch.logsumexp(logits + mask_y.log(), dim=1, keepdim=True)\n            log_prob_y = logits - denominator\n\n            if self.is_flipped:  # maximize log prob of samples from different domains and with same label\n                bn_loss = - (self.temperature / self.base_temperature) * torch.logsumexp(\n                    log_prob_y + mask_y_n_d.log(), dim=1)\n            else:  # minimize log prob of samples from same domains and with same label\n                bn_loss = (self.temperature / self.base_temperature) * torch.logsumexp(\n                    log_prob_y + mask_y_d.log(), dim=1)\n\n        def finite_mean(x):\n            # only 1D for now\n            num_finite = (torch.isfinite(x).float()).sum()\n            mean = torch.where(torch.isfinite(x), x, torch.tensor(0.0).to(x)).sum()\n            if num_finite != 0:\n                mean = mean / num_finite\n            else:\n                return torch.tensor(0.0).to(x)\n           
 return mean\n\n        return finite_mean(bn_loss)\n\n    def update(self, minibatches, unlabeled=None):\n        device = \"cuda\" if minibatches[0][0].is_cuda else \"cpu\"\n        all_x = torch.cat([x for x, y in minibatches])\n        all_y = torch.cat([y for x, y in minibatches])\n        all_z = self.featurizer(all_x)\n        all_d = torch.cat([\n            torch.full((x.shape[0],), i, dtype=torch.int64, device=device)\n            for i, (x, y) in enumerate(minibatches)\n        ])\n\n        bn_loss = self.bn_loss(all_z, all_y, all_d)\n        clf_out = self.classifier(all_z)\n        clf_loss = F.cross_entropy(clf_out, all_y)\n        total_loss = clf_loss + self.hparams['lmbda'] * bn_loss\n\n        self.optimizer.zero_grad()\n        total_loss.backward()\n        self.optimizer.step()\n\n        return {\"clf_loss\": clf_loss.item(), \"bn_loss\": bn_loss.item(), \"total_loss\": total_loss.item()}\n\n    def predict(self, x):\n        return self.classifier(self.featurizer(x))\n\n\nclass CAD(AbstractCAD):\n    \"\"\"Contrastive Adversarial Domain (CAD) bottleneck\n\n       Properties:\n       - Minimize I(D;Z)\n       - Require access to domain labels but not task labels\n       \"\"\"\n\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(CAD, self).__init__(input_shape, num_classes, num_domains, hparams, is_conditional=False)\n\n\nclass CondCAD(AbstractCAD):\n    \"\"\"Conditional Contrastive Adversarial Domain (CAD) bottleneck\n\n    Properties:\n    - Minimize I(D;Z|Y)\n    - Require access to both domain labels and task labels\n    \"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(CondCAD, self).__init__(input_shape, num_classes, num_domains, hparams, is_conditional=True)\n\n\nclass Transfer(Algorithm):\n    '''Algorithm 1 in Quantifying and Improving Transferability in Domain Generalization (https://arxiv.org/abs/2106.03632).\n    Tries to ensure transferability among source domains, and thus transferability between source and target.'''\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(Transfer, self).__init__(input_shape, num_classes, num_domains, hparams)\n        self.register_buffer('update_count', torch.tensor([0]))\n        self.d_steps_per_g = hparams['d_steps_per_g']\n\n        # Architecture\n        self.featurizer = networks.Featurizer(input_shape, self.hparams)\n        self.classifier = networks.Classifier(\n            self.featurizer.n_outputs,\n            num_classes,\n            self.hparams['nonlinear_classifier'])\n        self.adv_classifier = networks.Classifier(\n            self.featurizer.n_outputs,\n            num_classes,\n            self.hparams['nonlinear_classifier'])\n        self.adv_classifier.load_state_dict(self.classifier.state_dict())\n\n        # Optimizers\n        if self.hparams['gda']:\n            self.optimizer = torch.optim.SGD(self.adv_classifier.parameters(), lr=self.hparams['lr'])\n        else:\n            self.optimizer = torch.optim.Adam(\n                (list(self.featurizer.parameters()) + list(self.classifier.parameters())),\n                lr=self.hparams[\"lr\"],\n                weight_decay=self.hparams['weight_decay'])\n\n        self.adv_opt = torch.optim.SGD(self.adv_classifier.parameters(), lr=self.hparams['lr_d'])\n\n    def loss_gap(self, minibatches, device):\n        ''' compute gap = max_i loss_i(h) - min_j loss_j(h) for a single batch'''\n        
max_env_loss, min_env_loss =  torch.tensor([-float('inf')], device=device), torch.tensor([float('inf')], device=device)\n        for x, y in minibatches:\n            p = self.adv_classifier(self.featurizer(x))\n            loss = F.cross_entropy(p, y)\n            if loss > max_env_loss:\n                max_env_loss = loss\n            if loss < min_env_loss:\n                min_env_loss = loss\n        return max_env_loss - min_env_loss\n\n    def update(self, minibatches, unlabeled=None):\n        device = \"cuda\" if minibatches[0][0].is_cuda else \"cpu\"\n        # outer loop\n        all_x = torch.cat([x for x, y in minibatches])\n        all_y = torch.cat([y for x, y in minibatches])\n        loss = F.cross_entropy(self.predict(all_x), all_y)\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        del all_x, all_y\n        gap = self.hparams['t_lambda'] * self.loss_gap(minibatches, device)\n        self.optimizer.zero_grad()\n        gap.backward()\n        self.optimizer.step()\n        self.adv_classifier.load_state_dict(self.classifier.state_dict())\n        for _ in range(self.d_steps_per_g):\n            self.adv_opt.zero_grad()\n            gap = -self.hparams['t_lambda'] * self.loss_gap(minibatches, device)\n            gap.backward()\n            self.adv_opt.step()\n            self.adv_classifier = proj(self.hparams['delta'], self.adv_classifier, self.classifier)\n        return {'loss': loss.item(), 'gap': -gap.item()}\n\n    def update_second(self, minibatches, unlabeled=None):\n        device = \"cuda\" if minibatches[0][0].is_cuda else \"cpu\"\n        self.update_count = (self.update_count + 1) % (1 + self.d_steps_per_g)\n        if self.update_count.item() == 1:\n            all_x = torch.cat([x for x, y in minibatches])\n            all_y = torch.cat([y for x, y in minibatches])\n            loss = F.cross_entropy(self.predict(all_x), all_y)\n            self.optimizer.zero_grad()\n            loss.backward()\n            self.optimizer.step()\n\n            del all_x, all_y\n            gap = self.hparams['t_lambda'] * self.loss_gap(minibatches, device)\n            self.optimizer.zero_grad()\n            gap.backward()\n            self.optimizer.step()\n            self.adv_classifier.load_state_dict(self.classifier.state_dict())\n            return {'loss': loss.item(), 'gap': gap.item()}\n        else:\n            self.adv_opt.zero_grad()\n            gap = -self.hparams['t_lambda'] * self.loss_gap(minibatches, device)\n            gap.backward()\n            self.adv_opt.step()\n            self.adv_classifier = proj(self.hparams['delta'], self.adv_classifier, self.classifier)\n            return {'gap': -gap.item()}\n\n\n    def predict(self, x):\n        return self.classifier(self.featurizer(x))\n\n\nclass AbstractCausIRL(ERM):\n    '''Abstract class for Causality based invariant representation learning algorithm from (https://arxiv.org/abs/2206.11646)'''\n    def __init__(self, input_shape, num_classes, num_domains, hparams, gaussian):\n        super(AbstractCausIRL, self).__init__(input_shape, num_classes, num_domains,\n                                  hparams)\n        if gaussian:\n            self.kernel_type = \"gaussian\"\n        else:\n            self.kernel_type = \"mean_cov\"\n\n    def my_cdist(self, x1, x2):\n        x1_norm = x1.pow(2).sum(dim=-1, keepdim=True)\n        x2_norm = x2.pow(2).sum(dim=-1, keepdim=True)\n        res = torch.addmm(x2_norm.transpose(-2, -1),\n                        
  x1,\n                          x2.transpose(-2, -1), alpha=-2).add_(x1_norm)\n        return res.clamp_min_(1e-30)\n\n    def gaussian_kernel(self, x, y, gamma=[0.001, 0.01, 0.1, 1, 10, 100,\n                                           1000]):\n        D = self.my_cdist(x, y)\n        K = torch.zeros_like(D)\n\n        for g in gamma:\n            K.add_(torch.exp(D.mul(-g)))\n\n        return K\n\n    def mmd(self, x, y):\n        if self.kernel_type == \"gaussian\":\n            Kxx = self.gaussian_kernel(x, x).mean()\n            Kyy = self.gaussian_kernel(y, y).mean()\n            Kxy = self.gaussian_kernel(x, y).mean()\n            return Kxx + Kyy - 2 * Kxy\n        else:\n            mean_x = x.mean(0, keepdim=True)\n            mean_y = y.mean(0, keepdim=True)\n            cent_x = x - mean_x\n            cent_y = y - mean_y\n            cova_x = (cent_x.t() @ cent_x) / (len(x) - 1)\n            cova_y = (cent_y.t() @ cent_y) / (len(y) - 1)\n\n            mean_diff = (mean_x - mean_y).pow(2).mean()\n            cova_diff = (cova_x - cova_y).pow(2).mean()\n\n            return mean_diff + cova_diff\n\n    def update(self, minibatches, unlabeled=None):\n        objective = 0\n        penalty = 0\n        nmb = len(minibatches)\n\n        features = [self.featurizer(xi) for xi, _ in minibatches]\n        classifs = [self.classifier(fi) for fi in features]\n        targets = [yi for _, yi in minibatches]\n\n        first = None\n        second = None\n\n        for i in range(nmb):\n            objective += F.cross_entropy(classifs[i] + 1e-16, targets[i])\n            slice = np.random.randint(0, len(features[i]))\n            if first is None:\n                first = features[i][:slice]\n                second = features[i][slice:]\n            else:\n                first = torch.cat((first, features[i][:slice]), 0)\n                second = torch.cat((second, features[i][slice:]), 0)\n        if len(first) > 1 and len(second) > 1:\n            penalty = torch.nan_to_num(self.mmd(first, second))\n        else:\n            penalty = torch.tensor(0)\n        objective /= nmb\n\n        self.optimizer.zero_grad()\n        (objective + (self.hparams['mmd_gamma']*penalty)).backward()\n        self.optimizer.step()\n\n        if torch.is_tensor(penalty):\n            penalty = penalty.item()\n\n        return {'loss': objective.item(), 'penalty': penalty}\n\n\nclass CausIRL_MMD(AbstractCausIRL):\n    '''Causality based invariant representation learning algorithm using the MMD distance from (https://arxiv.org/abs/2206.11646)'''\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(CausIRL_MMD, self).__init__(input_shape, num_classes, num_domains,\n                                  hparams, gaussian=True)\n\n\nclass CausIRL_CORAL(AbstractCausIRL):\n    '''Causality based invariant representation learning algorithm using the CORAL distance from (https://arxiv.org/abs/2206.11646)'''\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(CausIRL_CORAL, self).__init__(input_shape, num_classes, num_domains,\n                                  hparams, gaussian=False)\n\n\nclass EQRM(ERM):\n    \"\"\"\n    Empirical Quantile Risk Minimization (EQRM).\n    Algorithm 1 from [https://arxiv.org/pdf/2207.09944.pdf].\n    \"\"\"\n    def __init__(self, input_shape, num_classes, num_domains, hparams, dist=None):\n        super().__init__(input_shape, num_classes, num_domains, hparams)\n        self.register_buffer('update_count', 
torch.tensor([0]))\n        self.register_buffer('alpha', torch.tensor(self.hparams[\"eqrm_quantile\"], dtype=torch.float64))\n        if dist is None:\n            self.dist = Nonparametric()\n        else:\n            self.dist = dist\n\n    def risk(self, x, y):\n        return F.cross_entropy(self.network(x), y).reshape(1)\n\n    def update(self, minibatches, unlabeled=None):\n        env_risks = torch.cat([self.risk(x, y) for x, y in minibatches])\n\n        if self.update_count < self.hparams[\"eqrm_burnin_iters\"]:\n            # Burn-in/annealing period uses ERM like penalty methods (which set penalty_weight=0, e.g. IRM, VREx.)\n            loss = torch.mean(env_risks)\n        else:\n            # Loss is the alpha-quantile value\n            self.dist.estimate_parameters(env_risks)\n            loss = self.dist.icdf(self.alpha)\n\n        if self.update_count == self.hparams['eqrm_burnin_iters']:\n            # Reset Adam (like IRM, VREx, etc.), because it doesn't like the sharp jump in\n            # gradient magnitudes that happens at this step.\n            self.optimizer = torch.optim.Adam(\n                self.network.parameters(),\n                lr=self.hparams[\"eqrm_lr\"],\n                weight_decay=self.hparams['weight_decay'])\n\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        self.update_count += 1\n\n        return {'loss': loss.item()}\n\n\nclass ADRMX(Algorithm):\n    '''ADRMX: Additive Disentanglement of Domain Features with Remix Loss from (https://arxiv.org/abs/2308.06624)'''\n    def __init__(self, input_shape, num_classes, num_domains, hparams):\n        super(ADRMX, self).__init__(input_shape, num_classes, num_domains,\n                                   hparams)\n        self.register_buffer('update_count', torch.tensor([0]))\n        \n        self.num_classes = num_classes\n        self.num_domains = num_domains\n        self.mix_num = 1\n        self.scl_int = SupConLossLambda(lamda=0.5)\n        self.scl_final = SupConLossLambda(lamda=0.5)\n        \n        self.featurizer_label = networks.Featurizer(input_shape, self.hparams)\n        self.featurizer_domain = networks.Featurizer(input_shape, self.hparams)\n\n        self.discriminator = networks.MLP(self.featurizer_domain.n_outputs,\n            num_domains, self.hparams)\n\n        self.classifier_label_1 = networks.Classifier(\n            self.featurizer_label.n_outputs,\n            num_classes,\n            is_nonlinear=True)\n\n        self.classifier_label_2 = networks.Classifier(\n            self.featurizer_label.n_outputs,\n            num_classes,\n            is_nonlinear=True)\n\n        self.classifier_domain = networks.Classifier(\n            self.featurizer_domain.n_outputs,\n            num_domains,\n            is_nonlinear=True)\n\n\n        self.network = nn.Sequential(self.featurizer_label, self.classifier_label_1)\n\n        self.disc_opt = torch.optim.Adam(\n            (list(self.discriminator.parameters())),\n            lr=self.hparams[\"lr\"],\n            betas=(self.hparams['beta1'], 0.9))\n\n        self.opt = torch.optim.Adam(\n            (list(self.featurizer_label.parameters()) +\n             list(self.featurizer_domain.parameters()) +\n             list(self.classifier_label_1.parameters()) +\n                list(self.classifier_label_2.parameters()) +\n                list(self.classifier_domain.parameters())),\n            lr=self.hparams[\"lr\"],\n            betas=(self.hparams['beta1'], 0.9))\n\n      
  \n    def update(self, minibatches, unlabeled=None):\n\n        self.update_count += 1\n        all_x = torch.cat([x for x, _ in minibatches])\n        all_y = torch.cat([y for _, y in minibatches])\n\n        feat_label = self.featurizer_label(all_x)\n        feat_domain = self.featurizer_domain(all_x)\n        feat_combined = feat_label - feat_domain\n\n        # get domain labels\n        disc_labels = torch.cat([\n            torch.full((x.shape[0], ), i, dtype=torch.int64, device=all_x.device)\n            for i, (x, _) in enumerate(minibatches)\n        ])\n        # predict domain feats from disentangled features\n        disc_out = self.discriminator(feat_combined)\n        disc_loss = F.cross_entropy(disc_out, disc_labels) # discriminative loss for final labels (ascend/descend)\n\n        d_steps_per_g = self.hparams['d_steps_per_g_step']\n        # alternating losses\n        if (self.update_count.item() % (1+d_steps_per_g) < d_steps_per_g):\n            # in discriminator turn\n            self.disc_opt.zero_grad()\n            disc_loss.backward()\n            self.disc_opt.step()\n            return {'loss_disc': disc_loss.item()}\n        else:\n            # in generator turn\n\n            # calculate CE from x_domain\n            domain_preds = self.classifier_domain(feat_domain)\n            classifier_loss_domain = F.cross_entropy(domain_preds, disc_labels) # domain clf loss\n            # keep as a tensor so .item() below is safe even when no pair gets remixed\n            classifier_remixed_loss = torch.tensor(0., device=all_x.device)\n\n            # calculate CE and contrastive loss from x_label\n            int_preds = self.classifier_label_1(feat_label)\n            classifier_loss_int = F.cross_entropy(int_preds, all_y) # intermediate CE Loss\n            cnt_loss_int = self.scl_int(feat_label, all_y, disc_labels)\n\n            # calculate CE and contrastive loss from x_dinv\n            final_preds = self.classifier_label_2(feat_combined)\n            classifier_loss_final = F.cross_entropy(final_preds, all_y) # final CE Loss\n            cnt_loss_final = self.scl_final(feat_combined, all_y, disc_labels)\n\n            # remix strategy\n            for i in range(self.num_classes):\n                indices = torch.where(all_y == i)[0]\n                for _ in range(self.mix_num):\n                    # get two instances from same class with different domains\n                    perm = torch.randperm(indices.numel())\n                    if len(perm) < 2:\n                        continue\n                    # map permutation positions back to batch indices of class i\n                    idx1, idx2 = indices[perm[0]], indices[perm[1]]\n                    # remix\n                    remixed_feat = feat_combined[idx1] + feat_domain[idx2]\n                    # make prediction\n                    pred = self.classifier_label_1(remixed_feat.view(1,-1))\n                    # accumulate the loss\n                    classifier_remixed_loss += F.cross_entropy(pred.view(1, -1), all_y[idx1].view(-1))\n            # normalize\n            classifier_remixed_loss /= (self.num_classes * self.mix_num)\n\n            # generator loss negates the discrimination loss (negative update)\n            gen_loss = (classifier_loss_int +\n                        classifier_loss_final +\n                        self.hparams[\"dclf_lambda\"] * classifier_loss_domain +\n                        self.hparams[\"rmxd_lambda\"] * classifier_remixed_loss +\n                        self.hparams['cnt_lambda'] * (cnt_loss_int + cnt_loss_final) + \n                        (self.hparams['disc_lambda'] * -disc_loss))\n            self.disc_opt.zero_grad()\n            self.opt.zero_grad()\n            
gen_loss.backward()\n            self.opt.step()\n\n            return {'loss_total': gen_loss.item(), \n                'loss_cnt_int': cnt_loss_int.item(),\n                'loss_cnt_final': cnt_loss_final.item(),\n                'loss_clf_int': classifier_loss_int.item(), \n                'loss_clf_fin': classifier_loss_final.item(), \n                'loss_dmn': classifier_loss_domain.item(), \n                'loss_disc': disc_loss.item(),\n                'loss_remixed': classifier_remixed_loss.item(),\n                }\n    \n    def predict(self, x):\n        return self.network(x)"
  },
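All of the classes in `algorithms.py` share one driver contract: construct with `(input_shape, num_classes, num_domains, hparams)`, then repeatedly call `update(minibatches)`, where `minibatches` holds one `(x, y)` batch per training domain. A minimal sketch of that loop, assuming the repo's `ERM` baseline and `default_hparams` helper; the toy tensor shapes and CPU-only run are illustrative assumptions (some classes above, e.g. `TRM` and `ADRMX`, hard-code `.cuda()` and need a GPU):

```python
import torch

from transopt.benchmark.HPOOOD import algorithms
from transopt.benchmark.HPOOOD.hparams_registry import default_hparams

# Toy setup (illustrative shapes): 3 source domains, 2 classes, 28x28 RGB.
hparams = default_hparams('ERM', 'ColoredMNIST')
algo = algorithms.ERM(input_shape=(3, 28, 28), num_classes=2,
                      num_domains=3, hparams=hparams)

for step in range(10):
    # One (x, y) minibatch per training domain, as update() expects.
    minibatches = [(torch.randn(8, 3, 28, 28), torch.randint(0, 2, (8,)))
                   for _ in range(3)]
    stats = algo.update(minibatches)  # one gradient step; returns a loss dict

print(stats)                               # e.g. {'loss': ...}
preds = algo.predict(torch.randn(4, 3, 28, 28))
```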
  {
    "path": "transopt/benchmark/HPOOOD/collect_results.py",
    "content": "import os\nimport numpy as np\nimport json\nimport pandas as pd\nimport re\n\nimport matplotlib.pyplot as plt\n\n\n\nout_put_dir = '/home/cola/transopt_files/output1/results'\nanalysis_dir = './analysis_res/'\n\n\n\ndef find_jsonl_files(directory):\n    jsonl_files = []\n    # 遍历指定目录及其子目录\n    for root, dirs, files in os.walk(directory):\n        for file in files:\n            if file.endswith(\".jsonl\"):\n                jsonl_files.append(os.path.join(root, file))\n    \n    # 按照文件名中的数字序号进行排序\n    jsonl_files.sort(key=lambda x: int(re.search(r'/(\\d+)_', x).group(1)))\n    \n    return jsonl_files\n\ndef find_dirs(directory):\n    dir_files = []\n    # 遍历指定目录及其子目录\n    for root, dirs, files in os.walk(directory):\n        for dir in dirs:\n            dir_files.append(os.path.join(root, dir))\n    return dir_files\n\ndef remove_empty_directories(directory):\n    # 遍历指定目录下的所有子目录\n    for root, dirs, files in os.walk(directory, topdown=False):\n        for dir_name in dirs:\n            dir_path = os.path.join(root, dir_name)\n            # 检查目录是否为空\n            if not os.listdir(dir_path):\n                # 如果目录为空，则删除\n                print(f\"Removing empty directory: {dir_path}\")\n                os.rmdir(dir_path)\n\n# remove_empty_directories(out_put_dir)\n# print(find_dirs(out_put_dir))\n\ndef plot_bins(test_data, val_data, save_file_name):\n    os.makedirs(analysis_dir + 'bins/', exist_ok=True)\n    \n    plt.clf()\n    bins = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]\n    plt.hist(test_data, bins=bins, alpha=0.5, label='test acc', color='blue', edgecolor='black')\n    plt.hist(val_data, bins=bins, alpha=0.5, label='val acc', color='orange', edgecolor='black')\n\n    \n    plt.savefig(analysis_dir + 'bins/' + save_file_name)\n\n\ndef plot_traj(test_data, val_data, save_file_name):\n    os.makedirs(analysis_dir + 'traj/', exist_ok=True)\n    \n    plt.clf()\n    # test_data = np.maximum.accumulate(test_data).flatten()\n    # val_data = np.maximum.accumulate(val_data).flatten()\n    plt.plot(test_data, label='test acc', color='blue')\n    plt.plot(val_data, label='val acc', color='orange')\n    plt.legend()\n    plt.savefig(analysis_dir + 'traj/' + save_file_name)\n    \n\ndef plot_scatter(x, y, values, save_file_name):\n    os.makedirs(analysis_dir + 'scatter/', exist_ok=True)\n    \n    plt.clf()\n    plt.scatter(x, y, s=100, c=values, cmap='Reds', edgecolor='black')\n    plt.colorbar(label='Value')\n\n\n    # 设置标题和标签\n    plt.title('Scatter Plot with Color Mapping')\n    plt.savefig(analysis_dir + 'scatter/' + save_file_name)\n\n\ndef print_table(table, header_text, row_labels, col_labels, colwidth=10,\n    latex=True):\n    \"\"\"Pretty-print a 2D array of data, optionally with row/col labels\"\"\"\n    print(\"\")\n\n    if latex:\n        num_cols = len(table[0])\n        print(\"\\\\begin{center}\")\n        print(\"\\\\adjustbox{max width=\\\\textwidth}{%\")\n        print(\"\\\\begin{tabular}{l\" + \"c\" * num_cols + \"}\")\n        print(\"\\\\toprule\")\n    else:\n        print(\"--------\", header_text)\n\n    for row, label in zip(table, row_labels):\n        row.insert(0, label)\n\n    if latex:\n        col_labels = [\"\\\\textbf{\" + str(col_label).replace(\"%\", \"\\\\%\") + \"}\"\n            for col_label in col_labels]\n    table.insert(0, col_labels)\n\n    for r, row in enumerate(table):\n        misc.print_row(row, colwidth=colwidth, latex=latex)\n        if latex and r == 0:\n            print(\"\\\\midrule\")\n    if latex:\n   
     print(\"\\\\bottomrule\")\n        print(\"\\\\end{tabular}}\")\n        print(\"\\\\end{center}\")\n\n\nNN_name = {}\ndatasets = {}\ntest_env = [0,1]\nfor dir_name in find_dirs(out_put_dir):\n    dir_name = dir_name.split('/')[-1]\n    # if 'ERM' in dir_name:\n    #     continue\n    # if 'IRM' in dir_name:\n    #     continue\n    NN_name[dir_name.split('_')[0]] = 1\n    datasets[dir_name.split('_')[1]] = 1\n\ndf = pd.DataFrame(0, index=list(datasets.keys()), columns=list(NN_name.keys()))\ndf2 = pd.DataFrame(0, index=list(datasets.keys()), columns=list(NN_name.keys()))\n\nfor dir_name in find_dirs(out_put_dir):\n    # if 'ERM' in dir_name:\n    #     continue\n    # if 'IRM' in dir_name:\n    #     continue\n    dir_name = dir_name.split('/')[-1]\n    nn_name=dir_name.split('_')[0]\n    dataset_name = dir_name.split('_')[1]\n    best_val_acc = 0\n    best_test_acc = 0\n    best_test_acc2 = 0\n    all_test = []\n    all_valid = []\n    \n    location = []\n    # if 'ColoredMNIST' in dir_name:\n    #     continue\n    for  file_name in find_jsonl_files(out_put_dir + '/' + dir_name):\n        # print(file_name)\n        f_name = file_name.split('/')[-1]\n        weight_decay = float(f_name.split('_')[-1][:-6])\n        lr = float(f_name.split('_')[-4])\n        location.append([lr, weight_decay])\n        with open(file_name, 'r') as f:\n            try:\n                results = json.load(f)\n                print(results)\n                val_acc = []\n                test_acc = []\n                for t_env in test_env:\n                    for k,v in results.items():\n                        if f'env{t_env}_out_acc' == k:\n                            test_acc.append(v)\n\n                for k,v in results.items():\n                    pattern = r'env\\d+_val_acc'\n                    if re.match(pattern, k):\n                        number = int(k[3])\n                        if number not in test_env:\n                            val_acc.append(v)\n\n                val_acc_mean = np.mean(val_acc)\n                test_acc_mean = np.mean(test_acc)\n                \n                all_test.append(test_acc_mean)\n                all_valid.append(val_acc_mean)\n                \n                if test_acc_mean > best_test_acc:\n                    best_test_acc = test_acc_mean\n                \n                if val_acc_mean > best_val_acc:\n                    best_val_acc = val_acc_mean\n                    best_test_acc2 = test_acc_mean\n\n            except:\n                print(f'{file_name} can not open')\n                continue\n    plot_bins(all_test, all_valid, f'{dir_name}.png')\n    plot_traj(all_test, all_valid, f'{dir_name}_traj.png')\n    locations = np.array(location)\n    plot_scatter(locations[:,0], locations[:,1], all_valid, f'{dir_name}_scatter.png')\n    \n    df.at[dataset_name, nn_name] = best_test_acc\n    df2.at[dataset_name, nn_name] = best_test_acc2\nprint(df)\nprint('------------------')\nprint(df2)\n\n\n\n    # with open(file, 'r') as f:\n    #     it = file.split('/')[-1].split('_')[0]\n    #     print(it)\n    #     results = json.load(f)\n"
  },
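`collect_results.py` expects result folders named `<algorithm>_<dataset>_...`, each containing numbered `.jsonl` files that hold one JSON object of per-environment accuracies. A sketch of the selection rule it implements, using a hypothetical record (the field values here are made up for illustration):

```python
# Hypothetical record in the shape the script parses.
record = {
    'env0_out_acc': 0.81,  # held-out accuracy of test env 0
    'env1_out_acc': 0.78,  # held-out accuracy of test env 1
    'env2_val_acc': 0.90,  # validation accuracy of a training env
    'env3_val_acc': 0.88,
}

test_env = [0, 1]
test_acc = sum(record[f'env{e}_out_acc'] for e in test_env) / len(test_env)
val_accs = [v for k, v in record.items()
            if k.endswith('_val_acc') and int(k[3]) not in test_env]
val_acc = sum(val_accs) / len(val_accs)
# df keeps the best test accuracy seen (an oracle bound); df2 keeps the test
# accuracy of the configuration with the best validation accuracy, which is
# the standard model-selection protocol.
```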
  {
    "path": "transopt/benchmark/HPOOOD/download.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\nfrom collections import defaultdict\nfrom torchvision.datasets import MNIST\nimport xml.etree.ElementTree as ET\nfrom zipfile import ZipFile\nimport argparse\nimport tarfile\nimport shutil\nimport gdown\nimport uuid\nimport json\nimport os\nimport urllib\n\nfrom wilds.datasets.camelyon17_dataset import Camelyon17Dataset\nfrom wilds.datasets.fmow_dataset import FMoWDataset\n\n\n# utils #######################################################################\n\ndef stage_path(data_dir, name):\n    full_path = os.path.join(data_dir, name)\n\n    if not os.path.exists(full_path):\n        os.makedirs(full_path)\n\n    return full_path\n\n\ndef download_and_extract(url, dst, remove=True):\n    gdown.download(url, dst, quiet=False)\n\n    if dst.endswith(\".tar.gz\"):\n        tar = tarfile.open(dst, \"r:gz\")\n        tar.extractall(os.path.dirname(dst))\n        tar.close()\n\n    if dst.endswith(\".tar\"):\n        tar = tarfile.open(dst, \"r:\")\n        tar.extractall(os.path.dirname(dst))\n        tar.close()\n\n    if dst.endswith(\".zip\"):\n        zf = ZipFile(dst, \"r\")\n        zf.extractall(os.path.dirname(dst))\n        zf.close()\n\n    if remove:\n        os.remove(dst)\n\n\n# VLCS ########################################################################\n\n# Slower, but builds dataset from the original sources\n#\n# def download_vlcs(data_dir):\n#     full_path = stage_path(data_dir, \"VLCS\")\n#\n#     tmp_path = os.path.join(full_path, \"tmp/\")\n#     if not os.path.exists(tmp_path):\n#         os.makedirs(tmp_path)\n#\n#     with open(\"domainbed/misc/vlcs_files.txt\", \"r\") as f:\n#         lines = f.readlines()\n#         files = [line.strip().split() for line in lines]\n#\n#     download_and_extract(\"http://pjreddie.com/media/files/VOCtrainval_06-Nov-2007.tar\",\n#                          os.path.join(tmp_path, \"voc2007_trainval.tar\"))\n#\n#     download_and_extract(\"https://drive.google.com/uc?id=1I8ydxaAQunz9R_qFFdBFtw6rFTUW9goz\",\n#                          os.path.join(tmp_path, \"caltech101.tar.gz\"))\n#\n#     download_and_extract(\"http://groups.csail.mit.edu/vision/Hcontext/data/sun09_hcontext.tar\",\n#                          os.path.join(tmp_path, \"sun09_hcontext.tar\"))\n#\n#     tar = tarfile.open(os.path.join(tmp_path, \"sun09.tar\"), \"r:\")\n#     tar.extractall(tmp_path)\n#     tar.close()\n#\n#     for src, dst in files:\n#         class_folder = os.path.join(data_dir, dst)\n#\n#         if not os.path.exists(class_folder):\n#             os.makedirs(class_folder)\n#\n#         dst = os.path.join(class_folder, uuid.uuid4().hex + \".jpg\")\n#\n#         if \"labelme\" in src:\n#             # download labelme from the web\n#             gdown.download(src, dst, quiet=False)\n#         else:\n#             src = os.path.join(tmp_path, src)\n#             shutil.copyfile(src, dst)\n#\n#     shutil.rmtree(tmp_path)\n\n\ndef download_vlcs(data_dir):\n    # Original URL: http://www.eecs.qmul.ac.uk/~dl307/project_iccv2017\n    full_path = stage_path(data_dir, \"VLCS\")\n\n    download_and_extract(\"https://drive.google.com/uc?id=1skwblH1_okBwxWxmRsp9_qi15hyPpxg8\",\n                         os.path.join(data_dir, \"VLCS.tar.gz\"))\n\n\n# MNIST #######################################################################\n\ndef download_mnist(data_dir):\n    # Original URL: http://yann.lecun.com/exdb/mnist/\n    full_path = stage_path(data_dir, \"MNIST\")\n    
MNIST(full_path, download=True)\n\n\n# PACS ########################################################################\n\ndef download_pacs(data_dir):\n    # Original URL: http://www.eecs.qmul.ac.uk/~dl307/project_iccv2017\n    full_path = stage_path(data_dir, \"PACS\")\n\n    download_and_extract(\"https://drive.google.com/uc?id=1JFr8f805nMUelQWWmfnJR3y4_SYoN5Pd\",\n                         os.path.join(data_dir, \"PACS.zip\"))\n\n    os.rename(os.path.join(data_dir, \"kfold\"),\n              full_path)\n\n\n# Office-Home #################################################################\n\ndef download_office_home(data_dir):\n    # Original URL: http://hemanthdv.org/OfficeHome-Dataset/\n    full_path = stage_path(data_dir, \"office_home\")\n\n    download_and_extract(\"https://drive.google.com/uc?id=1uY0pj7oFsjMxRwaD3Sxy0jgel0fsYXLC\",\n                         os.path.join(data_dir, \"office_home.zip\"))\n\n    os.rename(os.path.join(data_dir, \"OfficeHomeDataset_10072016\"),\n              full_path)\n\n\n# DomainNET ###################################################################\n\ndef download_domain_net(data_dir):\n    # Original URL: http://ai.bu.edu/M3SDA/\n    full_path = stage_path(data_dir, \"domain_net\")\n\n    urls = [\n        \"http://csr.bu.edu/ftp/visda/2019/multi-source/groundtruth/clipart.zip\",\n        \"http://csr.bu.edu/ftp/visda/2019/multi-source/infograph.zip\",\n        \"http://csr.bu.edu/ftp/visda/2019/multi-source/groundtruth/painting.zip\",\n        \"http://csr.bu.edu/ftp/visda/2019/multi-source/quickdraw.zip\",\n        \"http://csr.bu.edu/ftp/visda/2019/multi-source/real.zip\",\n        \"http://csr.bu.edu/ftp/visda/2019/multi-source/sketch.zip\"\n    ]\n\n    for url in urls:\n        download_and_extract(url, os.path.join(full_path, url.split(\"/\")[-1]))\n\n    with open(\"domainbed/misc/domain_net_duplicates.txt\", \"r\") as f:\n        for line in f.readlines():\n            try:\n                os.remove(os.path.join(full_path, line.strip()))\n            except OSError:\n                pass\n\n\n# TerraIncognita ##############################################################\n\ndef download_terra_incognita(data_dir):\n    # Original URL: https://beerys.github.io/CaltechCameraTraps/\n    # New URL: http://lila.science/datasets/caltech-camera-traps\n\n    full_path = stage_path(data_dir, \"terra_incognita\")\n\n    download_and_extract(\n        \"https://storage.googleapis.com/public-datasets-lila/caltechcameratraps/eccv_18_all_images_sm.tar.gz\",\n        os.path.join(full_path, \"terra_incognita_images.tar.gz\"))\n\n\n    download_and_extract(\n        \"https://storage.googleapis.com/public-datasets-lila/caltechcameratraps/eccv_18_annotations.tar.gz\",\n        os.path.join(full_path, \"eccv_18_annotations.tar.gz\"))\n\n\n    include_locations = [\"38\", \"46\", \"100\", \"43\"]\n\n    include_categories = [\n        \"bird\", \"bobcat\", \"cat\", \"coyote\", \"dog\", \"empty\", \"opossum\", \"rabbit\",\n        \"raccoon\", \"squirrel\"\n    ]\n\n    images_folder = os.path.join(full_path, \"eccv_18_all_images_sm/\")\n    annotations_folder = os.path.join(full_path,\"eccv_18_annotation_files/\")\n    cis_test_annotations_file = os.path.join(full_path, \"eccv_18_annotation_files/cis_test_annotations.json\")\n    cis_val_annotations_file =   os.path.join(full_path, \"eccv_18_annotation_files/cis_val_annotations.json\")\n    train_annotations_file =   os.path.join(full_path, \"eccv_18_annotation_files/train_annotations.json\")\n    
trans_test_annotations_file =   os.path.join(full_path, \"eccv_18_annotation_files/trans_test_annotations.json\")\n    trans_val_annotations_file =   os.path.join(full_path, \"eccv_18_annotation_files/trans_val_annotations.json\")\n    annotations_file_list = [cis_test_annotations_file, cis_val_annotations_file, train_annotations_file, trans_test_annotations_file, trans_val_annotations_file]\n    destination_folder = full_path\n\n    stats = {}\n    data = defaultdict(list)\n\n    if not os.path.exists(destination_folder):\n        os.mkdir(destination_folder)\n\n    for annotations_file in annotations_file_list:\n        annots = {}\n        with open(annotations_file, \"r\") as f:\n            annots = json.load(f)\n            for k, v in annots.items():\n                data[k].extend(v)\n\n\n\n    category_dict = {}\n    for item in data['categories']:\n        category_dict[item['id']] = item['name']\n\n    for image in data['images']:\n        image_location = str(image['location'])\n\n        if image_location not in include_locations:\n            continue\n\n        loc_folder = os.path.join(destination_folder,\n                                  'location_' + str(image_location) + '/')\n\n        if not os.path.exists(loc_folder):\n            os.mkdir(loc_folder)\n\n        image_id = image['id']\n        image_fname = image['file_name']\n\n        for annotation in data['annotations']:\n            if annotation['image_id'] == image_id:\n                if image_location not in stats:\n                    stats[image_location] = {}\n\n                category = category_dict[annotation['category_id']]\n\n                if category not in include_categories:\n                    continue\n\n                if category not in stats[image_location]:\n                    stats[image_location][category] = 1  # count the first occurrence too\n                else:\n                    stats[image_location][category] += 1\n\n                loc_cat_folder = os.path.join(loc_folder, category + '/')\n\n                if not os.path.exists(loc_cat_folder):\n                    os.mkdir(loc_cat_folder)\n\n                dst_path = os.path.join(loc_cat_folder, image_fname)\n                src_path = os.path.join(images_folder, image_fname)\n\n                shutil.copyfile(src_path, dst_path)\n\n    shutil.rmtree(images_folder)\n    shutil.rmtree(annotations_folder)\n\n\n\n# SVIRO #################################################################\n\ndef download_sviro(data_dir):\n    # Original URL: https://sviro.kl.dfki.de\n    full_path = stage_path(data_dir, \"sviro\")\n\n    download_and_extract(\"https://sviro.kl.dfki.de/?wpdmdl=1731\",\n                         os.path.join(data_dir, \"sviro_grayscale_rectangle_classification.zip\"))\n\n    os.rename(os.path.join(data_dir, \"SVIRO_DOMAINBED\"),\n              full_path)\n\n\n# SPAWRIOUS #############################################################\n\ndef download_spawrious(data_dir, remove=True):\n    import urllib.request  # the top-level 'import urllib' does not guarantee the request submodule is loaded\n    dst = os.path.join(data_dir, \"spawrious.tar.gz\")\n    urllib.request.urlretrieve('https://www.dropbox.com/s/e40j553480h3f3s/spawrious224.tar.gz?dl=1', dst)\n    tar = tarfile.open(dst, \"r:gz\")\n    tar.extractall(os.path.dirname(dst))\n    tar.close()\n    if remove:\n        os.remove(dst)\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser(description='Download datasets')\n    parser.add_argument('--data_dir', type=str, default='/home/cola/transopt_files/data/')\n    args = parser.parse_args()\n\n    # download_mnist(args.data_dir)\n    # 
download_pacs(args.data_dir)\n    # download_office_home(args.data_dir)\n    # download_domain_net(args.data_dir)\n    # download_vlcs(args.data_dir)\n    # download_terra_incognita(args.data_dir)\n    # download_spawrious(args.data_dir)\n    # download_sviro(args.data_dir)\n    # Camelyon17Dataset(root_dir=args.data_dir, download=True)\n    # FMoWDataset(root_dir=args.data_dir, download=True)\n"
  },
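The downloaders above are driven from `__main__` by uncommenting the relevant calls, but they can also be imported directly. A minimal sketch (the data root is an illustrative path; `gdown` and `wilds` must be installed for the module to import):

```python
from transopt.benchmark.HPOOOD.download import download_mnist, stage_path

data_dir = '/tmp/transopt_data'      # illustrative path, not the repo default
download_mnist(data_dir)             # fetches MNIST via torchvision into <data_dir>/MNIST
print(stage_path(data_dir, 'PACS'))  # just creates and returns the staging directory
```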
  {
    "path": "transopt/benchmark/HPOOOD/fast_data_loader.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\nimport torch\n\nclass _InfiniteSampler(torch.utils.data.Sampler):\n    \"\"\"Wraps another Sampler to yield an infinite stream.\"\"\"\n    def __init__(self, sampler):\n        self.sampler = sampler\n\n    def __iter__(self):\n        while True:\n            for batch in self.sampler:\n                yield batch\n\nclass InfiniteDataLoader:\n    def __init__(self, dataset, weights, batch_size, num_workers):\n        super().__init__()\n\n        if weights is not None:\n            sampler = torch.utils.data.WeightedRandomSampler(weights,\n                replacement=True,\n                num_samples=batch_size)\n        else:\n            sampler = torch.utils.data.RandomSampler(dataset,\n                replacement=True)\n\n        if weights == None:\n            weights = torch.ones(len(dataset))\n\n        batch_sampler = torch.utils.data.BatchSampler(\n            sampler,\n            batch_size=batch_size,\n            drop_last=True)\n\n        self._infinite_iterator = iter(torch.utils.data.DataLoader(\n            dataset,\n            num_workers=num_workers,\n            batch_sampler=_InfiniteSampler(batch_sampler)\n        ))\n\n    def __iter__(self):\n        while True:\n            yield next(self._infinite_iterator)\n\n    def __len__(self):\n        raise ValueError\n\nclass FastDataLoader:\n    \"\"\"DataLoader wrapper with slightly improved speed by not respawning worker\n    processes at every epoch.\"\"\"\n    def __init__(self, dataset, batch_size, num_workers):\n        super().__init__()\n\n        batch_sampler = torch.utils.data.BatchSampler(\n            torch.utils.data.RandomSampler(dataset, replacement=False),\n            batch_size=batch_size,\n            drop_last=False\n        )\n\n        self._infinite_iterator = iter(torch.utils.data.DataLoader(\n            dataset,\n            num_workers=num_workers,\n            batch_sampler=_InfiniteSampler(batch_sampler)\n        ))\n\n        self._length = len(batch_sampler)\n\n    def __iter__(self):\n        for _ in range(len(self)):\n            yield next(self._infinite_iterator)\n\n    def __len__(self):\n        return self._length\n"
  },
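`InfiniteDataLoader` yields batches forever (the training loop decides when to stop), while `FastDataLoader` makes one finite pass per iteration without respawning workers. A small usage sketch with a toy `TensorDataset`:

```python
import torch
from torch.utils.data import TensorDataset

from transopt.benchmark.HPOOOD.fast_data_loader import (
    FastDataLoader, InfiniteDataLoader)

# Illustrative toy dataset: 100 examples of 3x28x28 images, binary labels.
ds = TensorDataset(torch.randn(100, 3, 28, 28), torch.randint(0, 2, (100,)))

train = iter(InfiniteDataLoader(ds, weights=None, batch_size=16, num_workers=0))
for step in range(5):        # the caller bounds the infinite stream
    x, y = next(train)

for x, y in FastDataLoader(ds, batch_size=16, num_workers=0):
    pass                     # exactly ceil(100/16) batches, then stops
```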
  {
    "path": "transopt/benchmark/HPOOOD/hparams_registry.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport numpy as np\n\n\ndef _define_hparam(hparams, hparam_name, default_val, random_val_fn):\n    hparams[hparam_name] = (hparams, hparam_name, default_val, random_val_fn)\n\n\ndef _hparams(algorithm, dataset, random_seed):\n    \"\"\"\n    Global registry of hyperparams. Each entry is a (default, random) tuple.\n    New algorithms / networks / etc. should add entries here.\n    \"\"\"\n    SMALL_IMAGES = ['Debug28', 'RotatedMNIST', 'ColoredMNIST']\n\n    hparams = {}\n\n    def _hparam(name, default_val, random_val_fn):\n        \"\"\"Define a hyperparameter. random_val_fn takes a RandomState and\n        returns a random hyperparameter value.\"\"\"\n        assert(name not in hparams)\n        random_state = np.random.RandomState(random_seed)\n        hparams[name] = (default_val, random_val_fn(random_state))\n\n    # Unconditional hparam definitions.\n\n    _hparam('data_augmentation', True, lambda r: True)\n    _hparam('resnet18', False, lambda r: False)\n    _hparam('resnet_dropout', 0., lambda r: r.choice([0., 0.1, 0.5]))\n    _hparam('class_balanced', False, lambda r: False)\n    # TODO: nonlinear classifiers disabled\n    _hparam('nonlinear_classifier', False,\n            lambda r: bool(r.choice([False, False])))\n\n    # Algorithm-specific hparam definitions. Each block of code below\n    # corresponds to exactly one algorithm.\n\n    if algorithm in ['DANN', 'CDANN']:\n        _hparam('lambda', 1.0, lambda r: 10**r.uniform(-2, 2))\n        _hparam('weight_decay_d', 0., lambda r: 10**r.uniform(-6, -2))\n        _hparam('d_steps_per_g_step', 1, lambda r: int(2**r.uniform(0, 3)))\n        _hparam('grad_penalty', 0., lambda r: 10**r.uniform(-2, 1))\n        _hparam('beta1', 0.5, lambda r: r.choice([0., 0.5]))\n        _hparam('mlp_width', 256, lambda r: int(2 ** r.uniform(6, 10)))\n        _hparam('mlp_depth', 3, lambda r: int(r.choice([3, 4, 5])))\n        _hparam('mlp_dropout', 0., lambda r: r.choice([0., 0.1, 0.5]))\n\n    elif algorithm == 'Fish':\n        _hparam('meta_lr', 0.5, lambda r:r.choice([0.05, 0.1, 0.5]))\n\n    elif algorithm == \"RDM\": \n        if dataset in ['DomainNet']: \n            _hparam('rdm_lambda', 0.5, lambda r: r.uniform(0.1, 1.0))\n        elif dataset in ['PACS', 'TerraIncognita']:\n            _hparam('rdm_lambda', 5.0, lambda r: r.uniform(1.0, 10.0))\n        else:\n            _hparam('rdm_lambda', 5.0, lambda r: r.uniform(0.1, 10.0))\n            \n        if dataset == 'DomainNet':\n            _hparam('rdm_penalty_anneal_iters', 2400, lambda r: int(r.uniform(1500, 3000)))\n        else:\n            _hparam('rdm_penalty_anneal_iters', 1500, lambda r: int(r.uniform(800, 2700)))\n            \n        if dataset in ['TerraIncognita', 'OfficeHome', 'DomainNet']:\n            _hparam('variance_weight', 0.0, lambda r: r.choice([0.0]))\n        else:\n            _hparam('variance_weight', 0.004, lambda r: r.uniform(0.001, 0.007))\n            \n        _hparam('rdm_lr', 1.5e-5, lambda r: r.uniform(8e-6, 2e-5))\n\n    elif algorithm == \"RSC\":\n        _hparam('rsc_f_drop_factor', 1/3, lambda r: r.uniform(0, 0.5))\n        _hparam('rsc_b_drop_factor', 1/3, lambda r: r.uniform(0, 0.5))\n\n    elif algorithm == \"SagNet\":\n        _hparam('sag_w_adv', 0.1, lambda r: 10**r.uniform(-2, 1))\n\n    elif algorithm == \"IRM\":\n        _hparam('irm_lambda', 1e2, lambda r: 10**r.uniform(-1, 5))\n        _hparam('irm_penalty_anneal_iters', 500,\n                lambda r: 
int(10**r.uniform(0, 4)))\n\n    elif algorithm == \"Mixup\":\n        _hparam('mixup_alpha', 0.2, lambda r: 10**r.uniform(-1, 1))\n\n    elif algorithm == \"GroupDRO\":\n        _hparam('groupdro_eta', 1e-2, lambda r: 10**r.uniform(-3, -1))\n\n    elif algorithm == \"MMD\" or algorithm == \"CORAL\" or algorithm == \"CausIRL_CORAL\" or algorithm == \"CausIRL_MMD\":\n        _hparam('mmd_gamma', 1., lambda r: 10**r.uniform(-1, 1))\n\n    elif algorithm == \"MLDG\":\n        _hparam('mldg_beta', 1., lambda r: 10**r.uniform(-1, 1))\n        _hparam('n_meta_test', 2, lambda r:  r.choice([1, 2]))\n\n    elif algorithm == \"MTL\":\n        _hparam('mtl_ema', .99, lambda r: r.choice([0.5, 0.9, 0.99, 1.]))\n\n    elif algorithm == \"VREx\":\n        _hparam('vrex_lambda', 1e1, lambda r: 10**r.uniform(-1, 5))\n        _hparam('vrex_penalty_anneal_iters', 500,\n                lambda r: int(10**r.uniform(0, 4)))\n\n    elif algorithm == \"SD\":\n        _hparam('sd_reg', 0.1, lambda r: 10**r.uniform(-5, -1))\n\n    elif algorithm == \"ANDMask\":\n        _hparam('tau', 1, lambda r: r.uniform(0.5, 1.))\n\n    elif algorithm == \"IGA\":\n        _hparam('penalty', 1000, lambda r: 10**r.uniform(1, 5))\n\n    elif algorithm == \"SANDMask\":\n        _hparam('tau', 1.0, lambda r: r.uniform(0.0, 1.))\n        _hparam('k', 1e+1, lambda r: 10**r.uniform(-3, 5))\n\n    elif algorithm == \"Fishr\":\n        _hparam('lambda', 1000., lambda r: 10**r.uniform(1., 4.))\n        _hparam('penalty_anneal_iters', 1500, lambda r: int(r.uniform(0., 5000.)))\n        _hparam('ema', 0.95, lambda r: r.uniform(0.90, 0.99))\n\n    elif algorithm == \"TRM\":\n        _hparam('cos_lambda', 1e-4, lambda r: 10 ** r.uniform(-5, 0))\n        _hparam('iters', 200, lambda r: int(10 ** r.uniform(0, 4)))\n        _hparam('groupdro_eta', 1e-2, lambda r: 10 ** r.uniform(-3, -1))\n\n    elif algorithm == \"IB_ERM\":\n        _hparam('ib_lambda', 1e2, lambda r: 10**r.uniform(-1, 5))\n        _hparam('ib_penalty_anneal_iters', 500,\n                lambda r: int(10**r.uniform(0, 4)))\n\n    elif algorithm == \"IB_IRM\":\n        _hparam('irm_lambda', 1e2, lambda r: 10**r.uniform(-1, 5))\n        _hparam('irm_penalty_anneal_iters', 500,\n                lambda r: int(10**r.uniform(0, 4)))\n        _hparam('ib_lambda', 1e2, lambda r: 10**r.uniform(-1, 5))\n        _hparam('ib_penalty_anneal_iters', 500,\n                lambda r: int(10**r.uniform(0, 4)))\n\n    elif algorithm == \"CAD\" or algorithm == \"CondCAD\":\n        _hparam('lmbda', 1e-1, lambda r: r.choice([1e-4, 1e-3, 1e-2, 1e-1, 1, 1e1, 1e2]))\n        _hparam('temperature', 0.1, lambda r: r.choice([0.05, 0.1]))\n        _hparam('is_normalized', False, lambda r: False)\n        _hparam('is_project', False, lambda r: False)\n        _hparam('is_flipped', True, lambda r: True)\n        \n    elif algorithm == \"Transfer\":\n        _hparam('t_lambda', 1.0, lambda r: 10**r.uniform(-2, 1))\n        _hparam('delta', 2.0, lambda r: r.uniform(0.1, 3.0))\n        _hparam('d_steps_per_g', 10, lambda r: int(r.choice([1, 2, 5])))\n        _hparam('weight_decay_d', 0., lambda r: 10**r.uniform(-6, -2))\n        _hparam('gda', False, lambda r: True)\n        _hparam('beta1', 0.5, lambda r: r.choice([0., 0.5]))\n        _hparam('lr_d', 1e-3, lambda r: 10**r.uniform(-4.5, -2.5))\n\n    elif algorithm == 'EQRM':\n        _hparam('eqrm_quantile', 0.75, lambda r: r.uniform(0.5, 0.99))\n        _hparam('eqrm_burnin_iters', 2500, lambda r: 10 ** r.uniform(2.5, 3.5))\n        _hparam('eqrm_lr', 1e-6, 
lambda r: 10 ** r.uniform(-7, -5))\n\n    elif algorithm == \"ADRMX\":\n        _hparam('cnt_lambda', 1.0, lambda r: r.choice([1.0]))\n        _hparam('dclf_lambda', 1.0, lambda r: r.choice([1.0]))\n        _hparam('disc_lambda', 0.75, lambda r: r.choice([0.75]))\n        _hparam('rmxd_lambda', 1.0, lambda r: r.choice([1.0]))\n        _hparam('d_steps_per_g_step', 2, lambda r: r.choice([2]))\n        _hparam('beta1', 0.5, lambda r: r.choice([0.5]))\n        _hparam('mlp_width', 256, lambda r: r.choice([256]))\n        _hparam('mlp_depth', 9, lambda r: int(r.choice([8, 9, 10])))\n        _hparam('mlp_dropout', 0., lambda r: r.choice([0]))\n\n\n    # Dataset-and-algorithm-specific hparam definitions. Each block of code\n    # below corresponds to exactly one hparam. Avoid nested conditionals.\n\n    if dataset in SMALL_IMAGES:\n        if algorithm == \"ADRMX\":\n            _hparam('lr', 3e-3, lambda r: r.choice([5e-4, 1e-3, 2e-3, 3e-3]))\n        else:\n            _hparam('lr', 1e-3, lambda r: 10**r.uniform(-4.5, -2.5))\n    else:\n        if algorithm == \"ADRMX\":\n            _hparam('lr', 3e-5, lambda r: r.choice([2e-5, 3e-5, 4e-5, 5e-5]))\n        else:\n            _hparam('lr', 5e-5, lambda r: 10**r.uniform(-5, -3.5))\n\n    if dataset in SMALL_IMAGES:\n        _hparam('weight_decay', 0., lambda r: 0.)\n    else:\n        _hparam('weight_decay', 0., lambda r: 10**r.uniform(-6, -2))\n\n    if dataset in SMALL_IMAGES:\n        _hparam('batch_size', 64, lambda r: int(2**r.uniform(3, 9)))\n    elif algorithm == 'ARM':\n        _hparam('batch_size', 8, lambda r: 8)\n    elif algorithm == 'RDM':\n        if dataset in ['DomainNet', 'TerraIncognita']:\n            _hparam('batch_size', 40, lambda r: int(r.uniform(30, 60)))\n        else:\n            _hparam('batch_size', 88, lambda r: int(r.uniform(70, 100)))\n    elif dataset == 'DomainNet':\n        _hparam('batch_size', 32, lambda r: int(2**r.uniform(3, 5)))\n    else:\n        _hparam('batch_size', 32, lambda r: int(2**r.uniform(3, 5.5)))\n\n    if algorithm in ['DANN', 'CDANN'] and dataset in SMALL_IMAGES:\n        _hparam('lr_g', 1e-3, lambda r: 10**r.uniform(-4.5, -2.5))\n    elif algorithm in ['DANN', 'CDANN']:\n        _hparam('lr_g', 5e-5, lambda r: 10**r.uniform(-5, -3.5))\n\n    if algorithm in ['DANN', 'CDANN'] and dataset in SMALL_IMAGES:\n        _hparam('lr_d', 1e-3, lambda r: 10**r.uniform(-4.5, -2.5))\n    elif algorithm in ['DANN', 'CDANN']:\n        _hparam('lr_d', 5e-5, lambda r: 10**r.uniform(-5, -3.5))\n\n    if algorithm in ['DANN', 'CDANN'] and dataset in SMALL_IMAGES:\n        _hparam('weight_decay_g', 0., lambda r: 0.)\n    elif algorithm in ['DANN', 'CDANN']:\n        _hparam('weight_decay_g', 0., lambda r: 10**r.uniform(-6, -2))\n\n    return hparams\n\n\ndef default_hparams(algorithm, dataset):\n    return {name: default for name, (default, _) in\n            _hparams(algorithm, dataset, 0).items()}\n\n\ndef random_hparams(algorithm, dataset, seed):\n    return {name: rand for name, (_, rand) in\n            _hparams(algorithm, dataset, seed).items()}\n\n\ndef get_hparams(algorithm, dataset):\n    return _hparams(algorithm, dataset, 0)\n\n\n
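# Usage sketch (illustrative only): `default_hparams` returns each\n# hyperparameter's default value, while `random_hparams` draws from its\n# search distribution with the given seed, e.g.\n#\n#   defaults = default_hparams('ERM', 'PACS')    # {'lr': 5e-05, ...}\n#   sampled = random_hparams('ERM', 'PACS', 7)   # seeded random draw\n"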
  },
  {
    "path": "transopt/benchmark/HPOOOD/hpoood.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport numpy as np\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torchvision\nimport os\nimport random\nimport collections\nimport time\nimport json\nimport shutil\nimport hashlib\nimport copy\n\n\nfrom torchvision import datasets, transforms\n\nfrom typing import Dict, Union\n\n\nfrom transopt.agent.registry import problem_registry\nfrom transopt.benchmark.problem_base.non_tab_problem import NonTabularProblem\nfrom transopt.optimizer.sampler.random import RandomSampler\nfrom transopt.space.fidelity_space import FidelitySpace\nfrom transopt.space.search_space import SearchSpace\nfrom transopt.space.variable import *\nfrom transopt.benchmark.HPOOOD.hparams_registry import random_hparams, default_hparams, get_hparams\nfrom transopt.benchmark.HPOOOD import ooddatasets\nfrom transopt.benchmark.HPOOOD import misc\nfrom transopt.benchmark.HPOOOD import algorithms\nfrom transopt.benchmark.HPOOOD.fast_data_loader import InfiniteDataLoader, FastDataLoader\n\n\ndef make_record(step, hparams_seed, envs):\n    \"\"\"envs is a list of (in_acc, out_acc, is_test_env) tuples\"\"\"\n    result = {\n        'args': {'test_envs': [], 'hparams_seed': hparams_seed},\n        'step': step\n    }\n    for i, (in_acc, out_acc, is_test_env) in enumerate(envs):\n        if is_test_env:\n            result['args']['test_envs'].append(i)\n        result[f'env{i}_in_acc'] = in_acc\n        result[f'env{i}_out_acc'] = out_acc\n    return result\n        \n        \n\nclass HPOOOD_base(NonTabularProblem):\n    DATASETS = [\n    # Small images\n    \"ColoredMNIST\",\n    \"RotatedMNIST\",\n    # Big images\n    \"VLCS\",\n    \"PACS\",\n    \"OfficeHome\",\n    \"TerraIncognita\",\n    \"DomainNet\",\n    \"SVIRO\",\n    # WILDS datasets\n    \"WILDSCamelyon\",\n    \"WILDSFMoW\",\n    # Spawrious datasets\n    \"SpawriousO2O_easy\",\n    \"SpawriousO2O_medium\",\n    \"SpawriousO2O_hard\",\n    \"SpawriousM2M_easy\",\n    \"SpawriousM2M_medium\",\n    \"SpawriousM2M_hard\",\n    ]\n\n    problem_type = 'hpoood'\n    num_variables = 10\n    num_objectives = 1\n    workloads = []\n    fidelity = None\n    \n    ALGORITHMS = [\n        'ERM',\n        'Fish',\n        'IRM',\n        'GroupDRO',\n        'Mixup',\n        'MLDG',\n        'CORAL',\n        'MMD',\n        'DANN',\n        'CDANN',\n        'MTL',\n        'SagNet',\n        'ARM',\n        'VREx',\n        'RSC',\n        'SD',\n        'ANDMask',\n        'SANDMask',\n        'IGA',\n        'SelfReg',\n        \"Fishr\",\n        'TRM',\n        'IB_ERM',\n        'IB_IRM',\n        'CAD',\n        'CondCAD',\n        'Transfer',\n        'CausIRL_CORAL',\n        'CausIRL_MMD',\n        'EQRM',\n        'RDM',\n        'ADRMX',\n    ]\n\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, algorithm\n        ):\n        self.dataset_name = HPOOOD_base.DATASETS[workload]\n        self.algorithm_name = algorithm\n        self.test_envs = [0,1]\n        self.data_dir = '/home/cola/transopt_files/data/'\n        self.output_dir = f'/home/cola/transopt_files/output/'\n        self.holdout_fraction = 0.2\n        self.validate_fraction = 0.1\n        self.uda_holdout_fraction = 0.8\n        self.task = 'domain_generalization'\n        self.steps = 500\n        self.checkpoint_freq = 50\n        self.query = 0\n        \n        self.save_model_every_checkpoint = 
 False\n        \n        self.skip_model_save = False\n        \n        self.trial_seed = seed\n        \n        self.model_save_dir = self.output_dir + f'models/{self.algorithm_name}_{self.dataset_name}_{seed}/'\n        self.results_save_dir = self.output_dir + f'results/{self.algorithm_name}_{self.dataset_name}_{seed}/'\n        \n        print(f\"Selected algorithm: {self.algorithm_name}, dataset: {self.dataset_name}\")\n        \n        os.makedirs(self.model_save_dir, exist_ok=True)\n        os.makedirs(self.results_save_dir, exist_ok=True)\n        super(HPOOOD_base, self).__init__(\n            task_name=task_name,\n            budget=budget,\n            budget_type=budget_type,\n            seed=seed,\n            workload=workload,\n        )\n\n        random.seed(seed)\n        np.random.seed(seed)\n        torch.manual_seed(seed)\n\n        self.hparams = default_hparams(self.algorithm_name, self.dataset_name)\n\n        if self.dataset_name in vars(ooddatasets):\n            self.dataset = vars(ooddatasets)[self.dataset_name](self.data_dir,\n                self.test_envs, self.hparams)\n        else:\n            raise NotImplementedError\n        \n        in_splits = []\n        val_splits = []\n        out_splits = []\n        uda_splits = []\n        \n        for env_i, env in enumerate(self.dataset):\n            uda = []\n\n            out, in_ = misc.split_dataset(env,\n                int(len(env)*self.holdout_fraction),\n                misc.seed_hash(self.seed, env_i))\n            \n            val, in_ = misc.split_dataset(in_,\n                int(len(in_)*self.validate_fraction),\n                misc.seed_hash(self.seed, env_i))\n\n            if env_i in self.test_envs:\n                uda, in_ = misc.split_dataset(in_,\n                    int(len(in_)*self.uda_holdout_fraction),\n                    misc.seed_hash(self.trial_seed, env_i))\n\n            if self.hparams['class_balanced']:\n                in_weights = misc.make_weights_for_balanced_classes(in_)\n                val_weights = misc.make_weights_for_balanced_classes(val)\n                out_weights = misc.make_weights_for_balanced_classes(out)\n                if len(uda):\n                    uda_weights = misc.make_weights_for_balanced_classes(uda)\n            else:\n                in_weights, val_weights, out_weights, uda_weights = None, None, None, None\n            in_splits.append((in_, in_weights))\n            val_splits.append((val, val_weights))\n            out_splits.append((out, out_weights))\n            if len(uda):\n                uda_splits.append((uda, uda_weights))\n\n        # this check must run after every environment has been split; inside\n        # the loop it could fire before the test environments are reached\n        if self.task == \"domain_adaptation\" and len(uda_splits) == 0:\n            raise ValueError(\"Not enough unlabeled samples for domain adaptation.\")\n\n        self.train_loaders = [InfiniteDataLoader(\n            dataset=env,\n            weights=env_weights,\n            batch_size=self.hparams['batch_size'],\n            num_workers=self.dataset.N_WORKERS)\n            for i, (env, env_weights) in enumerate(in_splits)\n            if i not in self.test_envs]\n        \n        self.val_loaders = [InfiniteDataLoader(\n            dataset=env,\n            weights=env_weights,\n            batch_size=self.hparams['batch_size'],\n            num_workers=self.dataset.N_WORKERS)\n            for i, (env, env_weights) in enumerate(val_splits)\n            if i not in self.test_envs]\n
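\n        # loaders over the unlabeled split of the test environments; they are\n        # only consumed when self.task == 'domain_adaptation'\n        self.uda_loaders = [InfiniteDataLoader(\n            dataset=env,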
\n            weights=env_weights,\n            batch_size=self.hparams['batch_size'],\n            num_workers=self.dataset.N_WORKERS)\n            for i, (env, env_weights) in enumerate(uda_splits)]\n\n        self.eval_loaders = [FastDataLoader(\n            dataset=env,\n            batch_size=64,\n            num_workers=self.dataset.N_WORKERS)\n            for env, _ in (in_splits + val_splits + out_splits + uda_splits)]\n\n        self.eval_weights = [None for _ in (in_splits + val_splits + out_splits + uda_splits)]\n        self.eval_loader_names = ['env{}_in'.format(i)\n            for i in range(len(in_splits))]\n        self.eval_loader_names += ['env{}_val'.format(i)\n            for i in range(len(val_splits))]\n        self.eval_loader_names += ['env{}_out'.format(i)\n            for i in range(len(out_splits))]\n        self.eval_loader_names += ['env{}_uda'.format(i)\n            for i in range(len(uda_splits))]\n\n        self.train_minibatches_iterator = zip(*self.train_loaders)\n        self.uda_minibatches_iterator = zip(*self.uda_loaders)\n        self.checkpoint_vals = collections.defaultdict(lambda: [])\n\n        self.steps_per_epoch = min([len(env)/self.hparams['batch_size'] for env, _ in in_splits])\n\n        if torch.cuda.is_available():\n            # pin each trial to the GPU whose index matches the seed; this\n            # assumes at least (trial_seed + 1) visible CUDA devices\n            self.device = torch.device(f\"cuda:{self.trial_seed}\")\n        else:\n            self.device = \"cpu\"\n\n    def save_checkpoint(self, filename):\n        if self.skip_model_save:\n            return\n        save_dict = {\n            \"model_input_shape\": self.dataset.input_shape,\n            \"model_num_classes\": self.dataset.num_classes,\n            \"model_num_domains\": len(self.dataset) - len(self.test_envs),\n            \"model_hparams\": self.hparams,\n            \"model_dict\": self.algorithm.state_dict()\n        }\n        torch.save(save_dict, os.path.join(self.model_save_dir, filename))\n\n    def get_configuration_space(\n        self, seed: Union[int, None] = None):\n        \"\"\"\n        Creates the hyperparameter search space for this benchmark: the\n        learning rate and the weight decay, both expressed on a log2 scale\n        (objective_function maps them back with np.exp2).\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the search space\n\n        Returns\n        -------\n        SearchSpace\n        \"\"\"\n        variables = [Continuous('lr', [-8.0, 0.0]),\n                     Continuous('weight_decay', [-10.0, -5.0])]\n        ss = SearchSpace(variables)\n        self.hparam = ss\n        return ss\n\n    def get_fidelity_space(\n        self, seed: Union[int, None] = None):\n        \"\"\"\n        Creates the fidelity space for this benchmark. It is currently empty;\n        the step budget is passed in through the configuration instead.\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the fidelity space\n\n        Returns\n        -------\n        FidelitySpace\n        \"\"\"\n        fs = FidelitySpace([])\n        return fs\n\n    def train(self, configuration: dict):\n        \n        torch.backends.cudnn.deterministic = True\n        torch.backends.cudnn.benchmark = False\n        \n        self.hparams = default_hparams(self.algorithm_name, self.dataset_name)\n        self.hparams['lr'] = configuration[\"lr\"]\n        self.hparams['weight_decay'] = configuration[\"weight_decay\"]\n        self.steps = configuration['epoch']\n        
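# 'epoch' here is the step budget (fidelity): the number of minibatch\n        # updates to run, not the number of passes over the training data\n        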
print(self.steps)\n\n        n_steps = self.steps or self.dataset.N_STEPS\n        \n        last_results_keys = None\n        \n        start_step = 0\n        \n        for step in range(start_step, n_steps):\n            step_start_time = time.time()\n            minibatches_device = [(x.to(self.device), y.to(self.device))\n                for x,y in next(self.train_minibatches_iterator)]\n            if self.task == \"domain_adaptation\":\n                uda_device = [x.to(self.device)\n                    for x,_ in next(self.uda_minibatches_iterator)]\n            else:\n                uda_device = None\n            step_vals = self.algorithm.update(minibatches_device, uda_device)\n            self.checkpoint_vals['step_time'].append(time.time() - step_start_time)\n\n            for key, val in step_vals.items():\n                self.checkpoint_vals[key].append(val)\n\n            if (step % self.checkpoint_freq == 0) or (step == n_steps - 1):\n                results = {\n                    'step': step,\n                    'epoch': step / self.steps_per_epoch,\n                }\n\n                for key, val in self.checkpoint_vals.items():\n                    results[key] = np.mean(val)\n\n                evals = zip(self.eval_loader_names, self.eval_loaders, self.eval_weights)\n                for name, loader, weights in evals:\n                    acc = misc.accuracy(self.algorithm, loader, weights, self.device)\n                    results[name+'_acc'] = acc\n\n                results['mem_gb'] = torch.cuda.max_memory_allocated() / (1024.*1024.*1024.)\n\n                results_keys = sorted(results.keys())\n                if results_keys != last_results_keys:\n                    misc.print_row(results_keys, colwidth=12)\n                    last_results_keys = results_keys\n                misc.print_row([results[key] for key in results_keys],\n                    colwidth=12)\n\n                results.update({\n                    'hparams': self.hparams,\n                })\n\n                start_step = step + 1\n\n                if self.save_model_every_checkpoint:\n                    self.save_checkpoint(f'model_step{step}.pkl')\n        \n        self.save_checkpoint('model.pkl')\n        with open(os.path.join(self.model_save_dir, 'done'), 'w') as f:\n            f.write('done')\n        \n        return results\n    \n    \n    def get_score(self, configuration: dict):\n        algorithm_class = algorithms.get_algorithm_class(self.algorithm_name)\n        self.algorithm = algorithm_class(self.dataset.input_shape, self.dataset.num_classes,\n            len(self.dataset) - len(self.test_envs), self.hparams)\n        self.algorithm.to(self.device)\n        \n        self.query += 1\n        results = self.train(configuration)\n        \n        epochs_path = os.path.join(self.results_save_dir, f\"{self.query}_lr_{configuration['lr']}_weight_decay_{configuration['weight_decay']}.jsonl\")\n        with open(epochs_path, 'a') as f:\n            f.write(json.dumps(results, sort_keys=True) + \"\\n\")\n\n\n        val_acc = [i[1] for i in results.items() if 'val' in i[0]]\n        avg_val_acc = np.mean(val_acc)\n        \n        test_acc = [i[1] for i in results.items() if 'out' in i[0]]\n        avg_test_acc = np.mean(test_acc)\n        \n        return avg_val_acc, avg_test_acc\n        \n\n    def objective_function(\n        self,\n        configuration,\n        fidelity = None,\n        seed = None,\n        **kwargs\n    ) -> Dict:\n\n            \n       
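# the incoming 'lr' and 'weight_decay' are log2-scaled (see\n        # get_configuration_space); np.exp2 below maps them to raw values\n       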
 if 'epoch' in kwargs:\n            epoch = kwargs['epoch']\n        else:\n            epoch = 500\n            \n        if fidelity is None:\n            fidelity = {\"epoch\": epoch, \"data_frac\": 0.8}\n        c = {\n            \"lr\": np.exp2(configuration[\"lr\"]),\n            \"weight_decay\": np.exp2(configuration[\"weight_decay\"]),\n            \"batch_size\": 64,\n            \"epoch\": fidelity[\"epoch\"],\n        }\n        val_acc, test_acc = self.get_score(c)\n\n        results = {list(self.objective_info.keys())[0]: float(1 - val_acc)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n    \n    def get_objectives(self) -> Dict:\n        return {'function_value': 'minimize'}\n    \n    def get_problem_type(self):\n        return \"hpo\"\n    \n    \n    \n@problem_registry.register(\"ERMOOD\")\nclass ERMOOD(HPOOOD_base):    \n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n        ):\n        super(ERMOOD, self).__init__(task_name=task_name, budget_type=budget_type, budget=budget, seed = seed, workload = workload, algorithm='ERM')\n\n@problem_registry.register(\"IRMOOD\")\nclass IRMOOD(HPOOOD_base):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n        ):\n        super(IRMOOD, self).__init__(task_name=task_name, budget_type=budget_type, budget=budget, seed = seed, workload = workload, algorithm='IRM')\n\n@problem_registry.register(\"ARMOOD\")\nclass ARMOOD(HPOOOD_base):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n        ):\n        super(ARMOOD, self).__init__(task_name=task_name, budget_type=budget_type, budget=budget, seed = seed, workload = workload, algorithm='ARM')\n\n@problem_registry.register(\"MixupOOD\")\nclass MixupOOD(HPOOOD_base):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n        ):\n        super(MixupOOD, self).__init__(task_name=task_name, budget_type=budget_type, budget=budget, seed = seed, workload = workload, algorithm='Mixup')\n\n@problem_registry.register(\"DANNOOD\")\nclass DANNOOD(HPOOOD_base):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n        ):\n        super(DANNOOD, self).__init__(task_name=task_name, budget_type=budget_type, budget=budget, seed = seed, workload = workload, algorithm='DANN')\n        \n\n\n\n\nif __name__ == \"__main__\":\n    p = MixupOOD(task_name='', budget_type='FEs', budget=100, seed = 0, workload = 2)\n    configuration = {\n        \"lr\": -0.3,\n        \"weight_decay\": -5,\n    }\n    p.f(configuration=configuration)\n    \n\n"
  },
  {
    "path": "transopt/benchmark/HPOOOD/misc.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\n\"\"\"\nThings that don't belong anywhere else\n\"\"\"\n\nimport math\nimport hashlib\nimport sys\nfrom collections import OrderedDict\nfrom numbers import Number\nimport operator\n\nimport numpy as np\nimport torch\nfrom collections import Counter\nfrom itertools import cycle\n\n\ndef distance(h1, h2):\n    ''' distance of two networks (h1, h2 are classifiers)'''\n    dist = 0.\n    for param in h1.state_dict():\n        h1_param, h2_param = h1.state_dict()[param], h2.state_dict()[param]\n        dist += torch.norm(h1_param - h2_param) ** 2  # use Frobenius norms for matrices\n    return torch.sqrt(dist)\n\ndef proj(delta, adv_h, h):\n    ''' return proj_{B(h, \\delta)}(adv_h), Euclidean projection to Euclidean ball'''\n    ''' adv_h and h are two classifiers'''\n    dist = distance(adv_h, h)\n    if dist <= delta:\n        return adv_h\n    else:\n        ratio = delta / dist\n        for param_h, param_adv_h in zip(h.parameters(), adv_h.parameters()):\n            param_adv_h.data = param_h + ratio * (param_adv_h - param_h)\n        # print(\"distance: \", distance(adv_h, h))\n        return adv_h\n\ndef l2_between_dicts(dict_1, dict_2):\n    assert len(dict_1) == len(dict_2)\n    dict_1_values = [dict_1[key] for key in sorted(dict_1.keys())]\n    dict_2_values = [dict_2[key] for key in sorted(dict_1.keys())]\n    return (\n        torch.cat(tuple([t.view(-1) for t in dict_1_values])) -\n        torch.cat(tuple([t.view(-1) for t in dict_2_values]))\n    ).pow(2).mean()\n\nclass MovingAverage:\n\n    def __init__(self, ema, oneminusema_correction=True):\n        self.ema = ema\n        self.ema_data = {}\n        self._updates = 0\n        self._oneminusema_correction = oneminusema_correction\n\n    def update(self, dict_data):\n        ema_dict_data = {}\n        for name, data in dict_data.items():\n            data = data.view(1, -1)\n            if self._updates == 0:\n                previous_data = torch.zeros_like(data)\n            else:\n                previous_data = self.ema_data[name]\n\n            ema_data = self.ema * previous_data + (1 - self.ema) * data\n            if self._oneminusema_correction:\n                # correction by 1/(1 - self.ema)\n                # so that the gradients amplitude backpropagated in data is independent of self.ema\n                ema_dict_data[name] = ema_data / (1 - self.ema)\n            else:\n                ema_dict_data[name] = ema_data\n            self.ema_data[name] = ema_data.clone().detach()\n\n        self._updates += 1\n        return ema_dict_data\n\n\n\ndef make_weights_for_balanced_classes(dataset):\n    counts = Counter()\n    classes = []\n    for _, y in dataset:\n        y = int(y)\n        counts[y] += 1\n        classes.append(y)\n\n    n_classes = len(counts)\n\n    weight_per_class = {}\n    for y in counts:\n        weight_per_class[y] = 1 / (counts[y] * n_classes)\n\n    weights = torch.zeros(len(dataset))\n    for i, y in enumerate(classes):\n        weights[i] = weight_per_class[int(y)]\n\n    return weights\n\ndef pdb():\n    sys.stdout = sys.__stdout__\n    import pdb\n    print(\"Launching PDB, enter 'n' to step to parent function.\")\n    pdb.set_trace()\n\ndef seed_hash(*args):\n    \"\"\"\n    Derive an integer hash from all args, for use as a random seed.\n    \"\"\"\n    args_str = str(args)\n    return int(hashlib.md5(args_str.encode(\"utf-8\")).hexdigest(), 16) % (2**31)\n\ndef print_separator():\n    
print(\"=\"*80)\n\ndef print_row(row, colwidth=10, latex=False):\n    if latex:\n        sep = \" & \"\n        end_ = \"\\\\\\\\\"\n    else:\n        sep = \"  \"\n        end_ = \"\"\n\n    def format_val(x):\n        if np.issubdtype(type(x), np.floating):\n            x = \"{:.10f}\".format(x)\n        return str(x).ljust(colwidth)[:colwidth]\n    print(sep.join([format_val(x) for x in row]), end_)\n\nclass _SplitDataset(torch.utils.data.Dataset):\n    \"\"\"Used by split_dataset\"\"\"\n    def __init__(self, underlying_dataset, keys):\n        super(_SplitDataset, self).__init__()\n        self.underlying_dataset = underlying_dataset\n        self.keys = keys\n    def __getitem__(self, key):\n        return self.underlying_dataset[self.keys[key]]\n    def __len__(self):\n        return len(self.keys)\n\ndef split_dataset(dataset, n, seed=0):\n    \"\"\"\n    Return a pair of datasets corresponding to a random split of the given\n    dataset, with n datapoints in the first dataset and the rest in the last,\n    using the given random seed\n    \"\"\"\n    assert(n <= len(dataset))\n    keys = list(range(len(dataset)))\n    np.random.RandomState(seed).shuffle(keys)\n    keys_1 = keys[:n]\n    keys_2 = keys[n:]\n    return _SplitDataset(dataset, keys_1), _SplitDataset(dataset, keys_2)\n\n\ndef random_pairs_of_minibatches(minibatches):\n    perm = torch.randperm(len(minibatches)).tolist()\n    pairs = []\n\n    for i in range(len(minibatches)):\n        j = i + 1 if i < (len(minibatches) - 1) else 0\n\n        xi, yi = minibatches[perm[i]][0], minibatches[perm[i]][1]\n        xj, yj = minibatches[perm[j]][0], minibatches[perm[j]][1]\n\n        min_n = min(len(xi), len(xj))\n\n        pairs.append(((xi[:min_n], yi[:min_n]), (xj[:min_n], yj[:min_n])))\n\n    return pairs\n\ndef split_meta_train_test(minibatches, num_meta_test=1):\n    n_domains = len(minibatches)\n    perm = torch.randperm(n_domains).tolist()\n    pairs = []\n    meta_train = perm[:(n_domains-num_meta_test)]\n    meta_test = perm[-num_meta_test:]\n\n    for i,j in zip(meta_train, cycle(meta_test)):\n         xi, yi = minibatches[i][0], minibatches[i][1]\n         xj, yj = minibatches[j][0], minibatches[j][1]\n\n         min_n = min(len(xi), len(xj))\n         pairs.append(((xi[:min_n], yi[:min_n]), (xj[:min_n], yj[:min_n])))\n\n    return pairs\n\ndef accuracy(network, loader, weights, device):\n    correct = 0\n    total = 0\n    weights_offset = 0\n\n    network.eval()\n    with torch.no_grad():\n        for x, y in loader:\n            x = x.to(device)\n            y = y.to(device)\n            p = network.predict(x)\n            if weights is None:\n                batch_weights = torch.ones(len(x))\n            else:\n                batch_weights = weights[weights_offset : weights_offset + len(x)]\n                weights_offset += len(x)\n            batch_weights = batch_weights.to(device)\n            if p.size(1) == 1:\n                correct += (p.gt(0).eq(y).float() * batch_weights.view(-1, 1)).sum().item()\n            else:\n                correct += (p.argmax(1).eq(y).float() * batch_weights).sum().item()\n            total += batch_weights.sum().item()\n    network.train()\n\n    return correct / total\n\nclass Tee:\n    def __init__(self, fname, mode=\"a\"):\n        self.stdout = sys.stdout\n        self.file = open(fname, mode)\n\n    def write(self, message):\n        self.stdout.write(message)\n        self.file.write(message)\n        self.flush()\n\n    def flush(self):\n        
self.stdout.flush()\n        self.file.flush()\n\nclass ParamDict(OrderedDict):\n    \"\"\"Code adapted from https://github.com/Alok/rl_implementations/tree/master/reptile.\n    A dictionary where the values are Tensors, meant to represent weights of\n    a model. This subclass lets you perform arithmetic on weights directly.\"\"\"\n\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n\n    def _prototype(self, other, op):\n        if isinstance(other, Number):\n            return ParamDict({k: op(v, other) for k, v in self.items()})\n        elif isinstance(other, dict):\n            return ParamDict({k: op(self[k], other[k]) for k in self})\n        else:\n            raise NotImplementedError\n\n    def __add__(self, other):\n        return self._prototype(other, operator.add)\n\n    def __rmul__(self, other):\n        return self._prototype(other, operator.mul)\n\n    __mul__ = __rmul__\n\n    def __neg__(self):\n        return ParamDict({k: -v for k, v in self.items()})\n\n    def __sub__(self, other):\n        # a - b := a + (-b)\n        return self.__add__(other.__neg__())\n\n    def __truediv__(self, other):\n        return self._prototype(other, operator.truediv)\n\n\n############################################################\n# A general PyTorch implementation of KDE. Builds on:\n# https://github.com/EugenHotaj/pytorch-generative/blob/master/pytorch_generative/models/kde.py\n############################################################\n\nclass Kernel(torch.nn.Module):\n    \"\"\"Base class which defines the interface for all kernels.\"\"\"\n\n    def __init__(self, bw=None):\n        super().__init__()\n        self.bw = 0.05 if bw is None else bw\n\n    def _diffs(self, test_Xs, train_Xs):\n        \"\"\"Computes difference between each x in test_Xs with all train_Xs.\"\"\"\n        test_Xs = test_Xs.view(test_Xs.shape[0], 1, *test_Xs.shape[1:])\n        train_Xs = train_Xs.view(1, train_Xs.shape[0], *train_Xs.shape[1:])\n        return test_Xs - train_Xs\n\n    def forward(self, test_Xs, train_Xs):\n        \"\"\"Computes p(x) for each x in test_Xs given train_Xs.\"\"\"\n\n    def sample(self, train_Xs):\n        \"\"\"Generates samples from the kernel distribution.\"\"\"\n\n\nclass GaussianKernel(Kernel):\n    \"\"\"Implementation of the Gaussian kernel.\"\"\"\n\n    def forward(self, test_Xs, train_Xs):\n        diffs = self._diffs(test_Xs, train_Xs)\n        dims = tuple(range(len(diffs.shape))[2:])\n        if dims == ():\n            x_sq = diffs ** 2\n        else:\n            x_sq = torch.norm(diffs, p=2, dim=dims) ** 2\n\n        var = self.bw ** 2\n        exp = torch.exp(-x_sq / (2 * var))\n        
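# 1/sqrt(2*pi*var) is the normalizing constant of a Gaussian with\n        # variance bw**2; averaging the kernels over the train_Xs axis below\n        # yields the standard KDE density estimate\n        coef = 1. 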
/ torch.sqrt(2 * np.pi * var)\n\n        return (coef * exp).mean(dim=1)\n\n    def sample(self, train_Xs):\n        # device = train_Xs.device\n        noise = torch.randn(train_Xs.shape) * self.bw\n        return train_Xs + noise\n\n    def cdf(self, test_Xs, train_Xs):\n        mus = train_Xs                                                      # kernel centred on each observation\n        sigmas = torch.ones(len(mus), device=test_Xs.device) * self.bw      # bandwidth = stddev\n        x_ = test_Xs.repeat(len(mus), 1).T                                  # repeat to allow broadcasting below\n        return torch.mean(torch.distributions.Normal(mus, sigmas).cdf(x_))\n\n\ndef estimate_bandwidth(x, method=\"silverman\"):\n    x_, _ = torch.sort(x)\n    n = len(x_)\n    sample_std = torch.std(x_, unbiased=True)\n\n    if method == 'silverman':\n        # https://en.wikipedia.org/wiki/Kernel_density_estimation#A_rule-of-thumb_bandwidth_estimator\n        iqr = torch.quantile(x_, 0.75) - torch.quantile(x_, 0.25)\n        bandwidth = 0.9 * torch.min(sample_std, iqr / 1.34) * n ** (-0.2)\n\n    elif method.lower() == 'gauss-optimal':\n        bandwidth = 1.06 * sample_std * (n ** -0.2)\n\n    else:\n        raise ValueError(f\"Invalid method selected: {method}.\")\n\n    return bandwidth\n\n\nclass KernelDensityEstimator(torch.nn.Module):\n    \"\"\"The KernelDensityEstimator model.\"\"\"\n\n    def __init__(self, train_Xs, kernel='gaussian', bw_select='Gauss-optimal'):\n        \"\"\"Initializes a new KernelDensityEstimator.\n        Args:\n            train_Xs: The \"training\" data to use when estimating probabilities.\n            kernel: The kernel to place on each of the train_Xs.\n        \"\"\"\n        super().__init__()\n        self.train_Xs = train_Xs\n        self._n_kernels = len(self.train_Xs)\n\n        if bw_select is not None:\n            self.bw = estimate_bandwidth(self.train_Xs, bw_select)\n        else:\n            self.bw = None\n\n        if kernel.lower() == 'gaussian':\n            self.kernel = GaussianKernel(self.bw)\n        else:\n            raise NotImplementedError(f\"'{kernel}' kernel not implemented.\")\n\n    @property\n    def device(self):\n        return self.train_Xs.device\n\n    # TODO(eugenhotaj): This method consumes O(train_Xs * x) memory. 
Implement an iterative version instead.\n    def forward(self, x):\n        return self.kernel(x, self.train_Xs)\n\n    def sample(self, n_samples):\n        idxs = np.random.choice(range(self._n_kernels), size=n_samples)\n        return self.kernel.sample(self.train_Xs[idxs])\n\n    def cdf(self, x):\n        return self.kernel.cdf(x, self.train_Xs)\n\n\n############################################################\n# PyTorch implementation of 1D distributions.\n############################################################\n\nEPS = 1e-16\n\n\nclass Distribution1D:\n    def __init__(self, dist_function=None):\n        \"\"\"\n        :param dist_function: function to instantiate the distribution (self.dist).\n        :param parameters: list of parameters in the correct order for dist_function.\n        \"\"\"\n        self.dist = None\n        self.dist_function = dist_function\n\n    @property\n    def parameters(self):\n        raise NotImplementedError\n\n    def create_dist(self):\n        if self.dist_function is not None:\n            return self.dist_function(*self.parameters)\n        else:\n            raise NotImplementedError(\"No distribution function was specified during initialization.\")\n\n    def estimate_parameters(self, x):\n        raise NotImplementedError\n\n    def log_prob(self, x):\n        return self.create_dist().log_prob(x)\n\n    def cdf(self, x):\n        return self.create_dist().cdf(x)\n\n    def icdf(self, q):\n        return self.create_dist().icdf(q)\n\n    def sample(self, n=1):\n        if self.dist is None:\n            self.dist = self.create_dist()\n        n_ = torch.Size([]) if n == 1 else (n,)\n        return self.dist.sample(n_)\n\n    def sample_n(self, n=10):\n        return self.sample(n)\n\n\ndef continuous_bisect_fun_left(f, v, lo, hi, n_steps=32):\n    val_range = [lo, hi]\n    k = 0.5 * sum(val_range)\n    for _ in range(n_steps):\n        val_range[int(f(k) > v)] = k\n        next_k = 0.5 * sum(val_range)\n        if next_k == k:\n            break\n        k = next_k\n    return k\n\n\nclass Normal(Distribution1D):\n    def __init__(self, location=0, scale=1):\n        self.location = location\n        self.scale = scale\n        super().__init__(torch.distributions.Normal)\n\n    @property\n    def parameters(self):\n        return [self.location, self.scale]\n\n    def estimate_parameters(self, x):\n        mean = sum(x) / len(x)\n        var = sum([(x_i - mean) ** 2 for x_i in x]) / (len(x) - 1)\n        self.location = mean\n        self.scale = torch.sqrt(var + EPS)\n\n    def icdf(self, q):\n        if q >= 0:\n            return super().icdf(q)\n\n        else:\n            # To get q *very* close to 1 without numerical issues, we:\n            # 1) Use q < 0 to represent log(y), where q = 1 - y.\n            # 2) Use the inverse-normal-cdf approximation here:\n            #    https://math.stackexchange.com/questions/2964944/asymptotics-of-inverse-of-normal-cdf\n            log_y = q\n            return self.location + self.scale * math.sqrt(-2 * log_y)\n\n\nclass Nonparametric(Distribution1D):\n    def __init__(self, use_kde=True, bw_select='Gauss-optimal'):\n        self.use_kde = use_kde\n        self.bw_select = bw_select\n        self.bw, self.data, self.kde = None, None, None\n        super().__init__()\n\n    @property\n    def parameters(self):\n        return []\n\n    def estimate_parameters(self, x):\n        self.data, _ = torch.sort(x)\n\n        if self.use_kde:\n            
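# fit a Gaussian KDE on the sorted data so that icdf() below can\n            # invert the smooth KDE cdf by bisection\n            self.kde = 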
KernelDensityEstimator(self.data, bw_select=self.bw_select)\n            self.bw = torch.ones(1, device=self.data.device) * self.kde.bw\n\n    def icdf(self, q):\n        if not self.use_kde:\n            # Empirical or step CDF. Differentiable as torch.quantile uses (linear) interpolation.\n            return torch.quantile(self.data, float(q))\n\n        if q >= 0:\n            # Find quantile via binary search on the KDE CDF\n            lo = torch.distributions.Normal(self.data[0], self.bw[0]).icdf(q)\n            hi = torch.distributions.Normal(self.data[-1], self.bw[-1]).icdf(q)\n            return continuous_bisect_fun_left(self.kde.cdf, q, lo, hi)\n\n        else:\n            # To get q *very* close to 1 without numerical issues, we:\n            # 1) Use q < 0 to represent log(y), where q = 1 - y.\n            # 2) Use the inverse-normal-cdf approximation here:\n            #    https://math.stackexchange.com/questions/2964944/asymptotics-of-inverse-of-normal-cdf\n            log_y = q\n            v = torch.mean(self.data + self.bw * math.sqrt(-2 * log_y))\n            return v\n\n\n############################################################\n# Supervised Contrastive Loss implementation from:\n# https://arxiv.org/abs/2004.11362\n############################################################\nclass SupConLossLambda(torch.nn.Module):\n    def __init__(self, lamda: float=0.5, temperature: float=0.07):\n        super(SupConLossLambda, self).__init__()\n        self.temperature = temperature\n        self.lamda = lamda\n\n    def forward(self, features: torch.Tensor, labels: torch.Tensor, domain_labels: torch.Tensor) -> torch.Tensor:\n        batch_size, _ = features.shape\n        normalized_features = torch.nn.functional.normalize(features, p=2, dim=1)\n        # create a lookup table for pairwise dot prods\n        pairwise_dot_prods = torch.matmul(normalized_features, normalized_features.T)/self.temperature\n        loss = 0\n        nans = 0\n        for i, (label, domain_label) in enumerate(zip(labels, domain_labels)):\n            # take the positive and negative samples wrt in/out domain            \n            cond_pos_in_domain = torch.logical_and(labels==label, domain_labels == domain_label) # take all positives\n            cond_pos_in_domain[i] = False # exclude itself\n            cond_pos_out_domain = torch.logical_and(labels==label, domain_labels != domain_label)\n            cond_neg_in_domain = torch.logical_and(labels!=label, domain_labels == domain_label)\n            cond_neg_out_domain = torch.logical_and(labels!=label, domain_labels != domain_label)\n\n            pos_feats_in_domain = pairwise_dot_prods[cond_pos_in_domain]\n            pos_feats_out_domain = pairwise_dot_prods[cond_pos_out_domain]\n            neg_feats_in_domain = pairwise_dot_prods[cond_neg_in_domain]\n            neg_feats_out_domain = pairwise_dot_prods[cond_neg_out_domain]\n            \n\n            # calculate nominator and denominator wrt lambda scaling\n            scaled_exp_term = torch.cat((self.lamda * torch.exp(pos_feats_in_domain[:, i]), (1 - self.lamda) * torch.exp(pos_feats_out_domain[:, i])))\n            scaled_denom_const = torch.sum(torch.cat((self.lamda * torch.exp(neg_feats_in_domain[:, i]), (1 - self.lamda) * torch.exp(neg_feats_out_domain[:, i]), scaled_exp_term))) + 1e-5\n            \n            # nof positive samples\n            num_positives = pos_feats_in_domain.shape[0] + pos_feats_out_domain.shape[0] # total positive samples\n            log_fraction = 
torch.log((scaled_exp_term / scaled_denom_const) + 1e-5) # take log fraction\n            loss_i = torch.sum(log_fraction) / num_positives\n            if torch.isnan(loss_i):\n                nans += 1\n                continue\n            loss -= loss_i # sum and average over num positives\n        return loss/(batch_size-nans+1) # avg over batch\n"
  },
  {
    "path": "transopt/benchmark/HPOOOD/networks.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision.models\n\nfrom transopt.benchmark.HPOOOD import wide_resnet\nimport copy\n\n\ndef remove_batch_norm_from_resnet(model):\n    fuse = torch.nn.utils.fusion.fuse_conv_bn_eval\n    model.eval()\n\n    model.conv1 = fuse(model.conv1, model.bn1)\n    model.bn1 = Identity()\n\n    for name, module in model.named_modules():\n        if name.startswith(\"layer\") and len(name) == 6:\n            for b, bottleneck in enumerate(module):\n                for name2, module2 in bottleneck.named_modules():\n                    if name2.startswith(\"conv\"):\n                        bn_name = \"bn\" + name2[-1]\n                        setattr(bottleneck, name2,\n                                fuse(module2, getattr(bottleneck, bn_name)))\n                        setattr(bottleneck, bn_name, Identity())\n                if isinstance(bottleneck.downsample, torch.nn.Sequential):\n                    bottleneck.downsample[0] = fuse(bottleneck.downsample[0],\n                                                    bottleneck.downsample[1])\n                    bottleneck.downsample[1] = Identity()\n    model.train()\n    return model\n\n\nclass Identity(nn.Module):\n    \"\"\"An identity layer\"\"\"\n    def __init__(self):\n        super(Identity, self).__init__()\n\n    def forward(self, x):\n        return x\n\n\nclass MLP(nn.Module):\n    \"\"\"Just  an MLP\"\"\"\n    def __init__(self, n_inputs, n_outputs, hparams):\n        super(MLP, self).__init__()\n        self.input = nn.Linear(n_inputs, hparams['mlp_width'])\n        self.dropout = nn.Dropout(hparams['mlp_dropout'])\n        self.hiddens = nn.ModuleList([\n            nn.Linear(hparams['mlp_width'], hparams['mlp_width'])\n            for _ in range(hparams['mlp_depth']-2)])\n        self.output = nn.Linear(hparams['mlp_width'], n_outputs)\n        self.n_outputs = n_outputs\n\n    def forward(self, x):\n        x = self.input(x)\n        x = self.dropout(x)\n        x = F.relu(x)\n        for hidden in self.hiddens:\n            x = hidden(x)\n            x = self.dropout(x)\n            x = F.relu(x)\n        x = self.output(x)\n        return x\n\n\nclass ResNet(torch.nn.Module):\n    \"\"\"ResNet with the softmax chopped off and the batchnorm frozen\"\"\"\n    def __init__(self, input_shape, hparams):\n        super(ResNet, self).__init__()\n        if hparams['resnet18']:\n            self.network = torchvision.models.resnet18(pretrained=True)\n            self.n_outputs = 512\n        else:\n            self.network = torchvision.models.resnet50(pretrained=True)\n            self.n_outputs = 2048\n\n        # self.network = remove_batch_norm_from_resnet(self.network)\n\n        # adapt number of channels\n        nc = input_shape[0]\n        if nc != 3:\n            tmp = self.network.conv1.weight.data.clone()\n\n            self.network.conv1 = nn.Conv2d(\n                nc, 64, kernel_size=(7, 7),\n                stride=(2, 2), padding=(3, 3), bias=False)\n\n            for i in range(nc):\n                self.network.conv1.weight.data[:, i, :, :] = tmp[:, i % 3, :, :]\n\n        # save memory\n        del self.network.fc\n        self.network.fc = Identity()\n\n        self.freeze_bn()\n        self.hparams = hparams\n        self.dropout = nn.Dropout(hparams['resnet_dropout'])\n\n    def forward(self, x):\n        \"\"\"Encode x into a feature vector of size 
n_outputs.\"\"\"\n        return self.dropout(self.network(x))\n\n    def train(self, mode=True):\n        \"\"\"\n        Override the default train() to freeze the BN parameters\n        \"\"\"\n        super().train(mode)\n        self.freeze_bn()\n\n    def freeze_bn(self):\n        for m in self.network.modules():\n            if isinstance(m, nn.BatchNorm2d):\n                m.eval()\n\n\nclass MNIST_CNN(nn.Module):\n    \"\"\"\n    Hand-tuned architecture for MNIST.\n    Weirdness I've noticed so far with this architecture:\n    - adding a linear layer after the mean-pool in features hurts\n        RotatedMNIST-100 generalization severely.\n    \"\"\"\n    n_outputs = 128\n\n    def __init__(self, input_shape):\n        super(MNIST_CNN, self).__init__()\n        self.conv1 = nn.Conv2d(input_shape[0], 64, 3, 1, padding=1)\n        self.conv2 = nn.Conv2d(64, 128, 3, stride=2, padding=1)\n        self.conv3 = nn.Conv2d(128, 128, 3, 1, padding=1)\n        self.conv4 = nn.Conv2d(128, 128, 3, 1, padding=1)\n\n        self.bn0 = nn.GroupNorm(8, 64)\n        self.bn1 = nn.GroupNorm(8, 128)\n        self.bn2 = nn.GroupNorm(8, 128)\n        self.bn3 = nn.GroupNorm(8, 128)\n\n        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = F.relu(x)\n        x = self.bn0(x)\n\n        x = self.conv2(x)\n        x = F.relu(x)\n        x = self.bn1(x)\n\n        x = self.conv3(x)\n        x = F.relu(x)\n        x = self.bn2(x)\n\n        x = self.conv4(x)\n        x = F.relu(x)\n        x = self.bn3(x)\n\n        x = self.avgpool(x)\n        x = x.view(len(x), -1)\n        return x\n\n\nclass ContextNet(nn.Module):\n    def __init__(self, input_shape):\n        super(ContextNet, self).__init__()\n\n        # Keep same dimensions\n        padding = (5 - 1) // 2\n        self.context_net = nn.Sequential(\n            nn.Conv2d(input_shape[0], 64, 5, padding=padding),\n            nn.BatchNorm2d(64),\n            nn.ReLU(),\n            nn.Conv2d(64, 64, 5, padding=padding),\n            nn.BatchNorm2d(64),\n            nn.ReLU(),\n            nn.Conv2d(64, 1, 5, padding=padding),\n        )\n\n    def forward(self, x):\n        return self.context_net(x)\n\n\ndef Featurizer(input_shape, hparams):\n    \"\"\"Auto-select an appropriate featurizer for the given input shape.\"\"\"\n    if len(input_shape) == 1:\n        return MLP(input_shape[0], hparams[\"mlp_width\"], hparams)\n    elif input_shape[1:3] == (28, 28):\n        return MNIST_CNN(input_shape)\n    elif input_shape[1:3] == (32, 32):\n        return wide_resnet.Wide_ResNet(input_shape, 16, 2, 0.)\n    elif input_shape[1:3] == (224, 224):\n        return ResNet(input_shape, hparams)\n    else:\n        raise NotImplementedError\n\n\ndef Classifier(in_features, out_features, is_nonlinear=False):\n    if is_nonlinear:\n        return torch.nn.Sequential(\n            torch.nn.Linear(in_features, in_features // 2),\n            torch.nn.ReLU(),\n            torch.nn.Linear(in_features // 2, in_features // 4),\n            torch.nn.ReLU(),\n            torch.nn.Linear(in_features // 4, out_features))\n    else:\n        return torch.nn.Linear(in_features, out_features)\n\n\nclass WholeFish(nn.Module):\n    def __init__(self, input_shape, num_classes, hparams, weights=None):\n        super(WholeFish, self).__init__()\n        featurizer = Featurizer(input_shape, hparams)\n        classifier = Classifier(\n            featurizer.n_outputs,\n            num_classes,\n            
hparams['nonlinear_classifier'])\n        self.net = nn.Sequential(\n            featurizer, classifier\n        )\n        if weights is not None:\n            self.load_state_dict(copy.deepcopy(weights))\n\n    def reset_weights(self, weights):\n        self.load_state_dict(copy.deepcopy(weights))\n\n    def forward(self, x):\n        return self.net(x)\n"
  },
  {
    "path": "transopt/benchmark/HPOOOD/ooddatasets.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\nimport os\nimport torch\nfrom PIL import Image, ImageFile\nfrom torchvision import transforms\nimport torchvision.datasets.folder\nfrom torch.utils.data import TensorDataset, Subset, ConcatDataset, Dataset\nfrom torchvision.datasets import MNIST, ImageFolder\nfrom torchvision.transforms.functional import rotate\n\nfrom wilds.datasets.camelyon17_dataset import Camelyon17Dataset\nfrom wilds.datasets.fmow_dataset import FMoWDataset\n\nImageFile.LOAD_TRUNCATED_IMAGES = True\n\n\n\ndef get_dataset_class(dataset_name):\n    \"\"\"Return the dataset class with the given name.\"\"\"\n    if dataset_name not in globals():\n        raise NotImplementedError(\"Dataset not found: {}\".format(dataset_name))\n    return globals()[dataset_name]\n\n\ndef num_environments(dataset_name):\n    return len(get_dataset_class(dataset_name).ENVIRONMENTS)\n\n\nclass MultipleDomainDataset:\n    N_STEPS = 5001           # Default, subclasses may override\n    CHECKPOINT_FREQ = 100    # Default, subclasses may override\n    N_WORKERS = 1            # Default, subclasses may override\n    ENVIRONMENTS = None      # Subclasses should override\n    INPUT_SHAPE = None       # Subclasses should override\n\n    def __getitem__(self, index):\n        return self.datasets[index]\n\n    def __len__(self):\n        return len(self.datasets)\n\n\nclass Debug(MultipleDomainDataset):\n    def __init__(self, root, test_envs, hparams):\n        super().__init__()\n        self.input_shape = self.INPUT_SHAPE\n        self.num_classes = 2\n        self.datasets = []\n        for _ in [0, 1, 2]:\n            self.datasets.append(\n                TensorDataset(\n                    torch.randn(16, *self.INPUT_SHAPE),\n                    torch.randint(0, self.num_classes, (16,))\n                )\n            )\n\nclass Debug28(Debug):\n    INPUT_SHAPE = (3, 28, 28)\n    ENVIRONMENTS = ['0', '1', '2']\n\nclass Debug224(Debug):\n    INPUT_SHAPE = (3, 224, 224)\n    ENVIRONMENTS = ['0', '1', '2']\n\n\nclass MultipleEnvironmentMNIST(MultipleDomainDataset):\n    def __init__(self, root, environments, dataset_transform, input_shape,\n                 num_classes):\n        super().__init__()\n        if root is None:\n            raise ValueError('Data directory not specified!')\n\n        original_dataset_tr = MNIST(root, train=True, download=True)\n        original_dataset_te = MNIST(root, train=False, download=True)\n\n        original_images = torch.cat((original_dataset_tr.data,\n                                     original_dataset_te.data))\n\n        original_labels = torch.cat((original_dataset_tr.targets,\n                                     original_dataset_te.targets))\n\n        shuffle = torch.randperm(len(original_images))\n\n        original_images = original_images[shuffle]\n        original_labels = original_labels[shuffle]\n\n        self.datasets = []\n\n        for i in range(len(environments)):\n            images = original_images[i::len(environments)]\n            labels = original_labels[i::len(environments)]\n            self.datasets.append(dataset_transform(images, labels, environments[i]))\n\n        self.input_shape = input_shape\n        self.num_classes = num_classes\n\n\nclass ColoredMNIST(MultipleEnvironmentMNIST):\n    ENVIRONMENTS = ['+90%', '+80%', '-90%']\n\n    def __init__(self, root, test_envs, hparams):\n        super(ColoredMNIST, self).__init__(root, [0.1, 0.2, 0.9],\n                                       
  self.color_dataset, (2, 28, 28,), 2)\n\n        self.input_shape = (2, 28, 28,)\n        self.num_classes = 2\n\n    def color_dataset(self, images, labels, environment):\n        # # Subsample 2x for computational convenience\n        # images = images.reshape((-1, 28, 28))[:, ::2, ::2]\n        # Assign a binary label based on the digit\n        labels = (labels < 5).float()\n        # Flip label with probability 0.25\n        labels = self.torch_xor_(labels,\n                                 self.torch_bernoulli_(0.25, len(labels)))\n\n        # Assign a color based on the label; flip the color with probability e\n        colors = self.torch_xor_(labels,\n                                 self.torch_bernoulli_(environment,\n                                                       len(labels)))\n        images = torch.stack([images, images], dim=1)\n        # Apply the color to the image by zeroing out the other color channel\n        images[torch.tensor(range(len(images))), (\n            1 - colors).long(), :, :] *= 0\n\n        x = images.float().div_(255.0)\n        y = labels.view(-1).long()\n\n        return TensorDataset(x, y)\n\n    def torch_bernoulli_(self, p, size):\n        return (torch.rand(size) < p).float()\n\n    def torch_xor_(self, a, b):\n        return (a - b).abs()\n\n\nclass RotatedMNIST(MultipleEnvironmentMNIST):\n    ENVIRONMENTS = ['0', '15', '30', '45', '60', '75']\n\n    def __init__(self, root, test_envs, hparams):\n        super(RotatedMNIST, self).__init__(root, [0, 15, 30, 45, 60, 75],\n                                           self.rotate_dataset, (1, 28, 28,), 10)\n\n    def rotate_dataset(self, images, labels, angle):\n        rotation = transforms.Compose([\n            transforms.ToPILImage(),\n            transforms.Lambda(lambda x: rotate(x, angle, fill=(0,),\n                interpolation=torchvision.transforms.InterpolationMode.BILINEAR)),\n            transforms.ToTensor()])\n\n        x = torch.zeros(len(images), 1, 28, 28)\n        for i in range(len(images)):\n            x[i] = rotation(images[i])\n\n        y = labels.view(-1)\n\n        return TensorDataset(x, y)\n\n\nclass MultipleEnvironmentImageFolder(MultipleDomainDataset):\n    def __init__(self, root, test_envs, augment, hparams):\n        super().__init__()\n        environments = [f.name for f in os.scandir(root) if f.is_dir()]\n        environments = sorted(environments)\n\n        transform = transforms.Compose([\n            transforms.Resize((224,224)),\n            transforms.ToTensor(),\n            transforms.Normalize(\n                mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n        ])\n\n        augment_transform = transforms.Compose([\n            # transforms.Resize((224,224)),\n            transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),\n            transforms.RandomHorizontalFlip(),\n            transforms.ColorJitter(0.3, 0.3, 0.3, 0.3),\n            transforms.RandomGrayscale(),\n            transforms.ToTensor(),\n            transforms.Normalize(\n                mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n        ])\n\n        self.datasets = []\n        for i, environment in enumerate(environments):\n\n            if augment and (i not in test_envs):\n                env_transform = augment_transform\n            else:\n                env_transform = transform\n\n            path = os.path.join(root, environment)\n            env_dataset = ImageFolder(path,\n                transform=env_transform)\n\n            
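# datasets are appended in sorted-environment order, so their indices\n            # line up with ENVIRONMENTS and with test_envs\n            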
self.datasets.append(env_dataset)\n\n        self.input_shape = (3, 224, 224,)\n        self.num_classes = len(self.datasets[-1].classes)\n\nclass VLCS(MultipleEnvironmentImageFolder):\n    CHECKPOINT_FREQ = 300\n    ENVIRONMENTS = [\"C\", \"L\", \"S\", \"V\"]\n    def __init__(self, root, test_envs, hparams):\n        self.dir = os.path.join(root, \"VLCS/\")\n        super().__init__(self.dir, test_envs, hparams['data_augmentation'], hparams)\n\nclass PACS(MultipleEnvironmentImageFolder):\n    CHECKPOINT_FREQ = 300\n    ENVIRONMENTS = [\"A\", \"C\", \"P\", \"S\"]\n    def __init__(self, root, test_envs, hparams):\n        self.dir = os.path.join(root, \"PACS/\")\n        super().__init__(self.dir, test_envs, hparams['data_augmentation'], hparams)\n\nclass DomainNet(MultipleEnvironmentImageFolder):\n    CHECKPOINT_FREQ = 1000\n    ENVIRONMENTS = [\"clip\", \"info\", \"paint\", \"quick\", \"real\", \"sketch\"]\n    def __init__(self, root, test_envs, hparams):\n        self.dir = os.path.join(root, \"domain_net/\")\n        super().__init__(self.dir, test_envs, hparams['data_augmentation'], hparams)\n\nclass OfficeHome(MultipleEnvironmentImageFolder):\n    CHECKPOINT_FREQ = 300\n    ENVIRONMENTS = [\"A\", \"C\", \"P\", \"R\"]\n    def __init__(self, root, test_envs, hparams):\n        self.dir = os.path.join(root, \"office_home/\")\n        super().__init__(self.dir, test_envs, hparams['data_augmentation'], hparams)\n\nclass TerraIncognita(MultipleEnvironmentImageFolder):\n    CHECKPOINT_FREQ = 300\n    ENVIRONMENTS = [\"L100\", \"L38\", \"L43\", \"L46\"]\n    def __init__(self, root, test_envs, hparams):\n        self.dir = os.path.join(root, \"terra_incognita/\")\n        super().__init__(self.dir, test_envs, hparams['data_augmentation'], hparams)\n\nclass SVIRO(MultipleEnvironmentImageFolder):\n    CHECKPOINT_FREQ = 300\n    ENVIRONMENTS = [\"aclass\", \"escape\", \"hilux\", \"i3\", \"lexus\", \"tesla\", \"tiguan\", \"tucson\", \"x5\", \"zoe\"]\n    def __init__(self, root, test_envs, hparams):\n        self.dir = os.path.join(root, \"sviro/\")\n        super().__init__(self.dir, test_envs, hparams['data_augmentation'], hparams)\n\n\nclass WILDSEnvironment:\n    def __init__(\n            self,\n            wilds_dataset,\n            metadata_name,\n            metadata_value,\n            transform=None):\n        self.name = metadata_name + \"_\" + str(metadata_value)\n\n        metadata_index = wilds_dataset.metadata_fields.index(metadata_name)\n        metadata_array = wilds_dataset.metadata_array\n        subset_indices = torch.where(\n            metadata_array[:, metadata_index] == metadata_value)[0]\n\n        self.dataset = wilds_dataset\n        self.indices = subset_indices\n        self.transform = transform\n\n    def __getitem__(self, i):\n        x = self.dataset.get_input(self.indices[i])\n        if type(x).__name__ != \"Image\":\n            x = Image.fromarray(x)\n\n        y = self.dataset.y_array[self.indices[i]]\n        if self.transform is not None:\n            x = self.transform(x)\n        return x, y\n\n    def __len__(self):\n        return len(self.indices)\n\n\nclass WILDSDataset(MultipleDomainDataset):\n    INPUT_SHAPE = (3, 224, 224)\n    def __init__(self, dataset, metadata_name, test_envs, augment, hparams):\n        super().__init__()\n\n        transform = transforms.Compose([\n            transforms.Resize((224, 224)),\n            transforms.ToTensor(),\n            transforms.Normalize(\n                mean=[0.485, 0.456, 0.406], std=[0.229, 
0.224, 0.225])\n        ])\n\n        augment_transform = transforms.Compose([\n            transforms.Resize((224, 224)),\n            transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),\n            transforms.RandomHorizontalFlip(),\n            transforms.ColorJitter(0.3, 0.3, 0.3, 0.3),\n            transforms.RandomGrayscale(),\n            transforms.ToTensor(),\n            transforms.Normalize(\n                mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n        ])\n\n        self.datasets = []\n\n        for i, metadata_value in enumerate(\n                self.metadata_values(dataset, metadata_name)):\n            if augment and (i not in test_envs):\n                env_transform = augment_transform\n            else:\n                env_transform = transform\n\n            env_dataset = WILDSEnvironment(\n                dataset, metadata_name, metadata_value, env_transform)\n\n            self.datasets.append(env_dataset)\n\n        self.input_shape = (3, 224, 224,)\n        self.num_classes = dataset.n_classes\n\n    def metadata_values(self, wilds_dataset, metadata_name):\n        metadata_index = wilds_dataset.metadata_fields.index(metadata_name)\n        metadata_vals = wilds_dataset.metadata_array[:, metadata_index]\n        return sorted(list(set(metadata_vals.view(-1).tolist())))\n\n\nclass WILDSCamelyon(WILDSDataset):\n    ENVIRONMENTS = [ \"hospital_0\", \"hospital_1\", \"hospital_2\", \"hospital_3\",\n            \"hospital_4\"]\n    def __init__(self, root, test_envs, hparams):\n        dataset = Camelyon17Dataset(root_dir=root)\n        super().__init__(\n            dataset, \"hospital\", test_envs, hparams['data_augmentation'], hparams)\n\n\nclass WILDSFMoW(WILDSDataset):\n    ENVIRONMENTS = [ \"region_0\", \"region_1\", \"region_2\", \"region_3\",\n            \"region_4\", \"region_5\"]\n    def __init__(self, root, test_envs, hparams):\n        dataset = FMoWDataset(root_dir=root)\n        super().__init__(\n            dataset, \"region\", test_envs, hparams['data_augmentation'], hparams)\n\n\n## Spawrious base classes\nclass CustomImageFolder(Dataset):\n    \"\"\"\n    A class that takes one folder at a time and loads a set number of images in a folder and assigns them a specific class\n    \"\"\"\n    def __init__(self, folder_path, class_index, limit=None, transform=None):\n        self.folder_path = folder_path\n        self.class_index = class_index\n        self.image_paths = [os.path.join(folder_path, img) for img in os.listdir(folder_path) if img.endswith(('.png', '.jpg', '.jpeg'))]\n        if limit:\n            self.image_paths = self.image_paths[:limit]\n        self.transform = transform\n\n    def __len__(self):\n        return len(self.image_paths)\n\n    def __getitem__(self, index):\n        img_path = self.image_paths[index]\n        img = Image.open(img_path).convert('RGB')\n        \n        if self.transform:\n            img = self.transform(img)\n        \n        label = torch.tensor(self.class_index, dtype=torch.long)\n        return img, label\n\nclass SpawriousBenchmark(MultipleDomainDataset):\n    ENVIRONMENTS = [\"Test\", \"SC_group_1\", \"SC_group_2\"]\n    input_shape = (3, 224, 224)\n    num_classes = 4\n    class_list = [\"bulldog\", \"corgi\", \"dachshund\", \"labrador\"]\n\n    def __init__(self, train_combinations, test_combinations, root_dir, augment=True, type1=False):\n        self.type1 = type1\n        train_datasets, test_datasets = self._prepare_data_lists(train_combinations, test_combinations, 
root_dir, augment)\n        self.datasets = [ConcatDataset(test_datasets)] + train_datasets\n\n    # Prepares the train and test data lists by applying the necessary transformations.\n    def _prepare_data_lists(self, train_combinations, test_combinations, root_dir, augment):\n        test_transforms = transforms.Compose([\n            transforms.Resize((self.input_shape[1], self.input_shape[2])),\n            transforms.ToTensor(),\n            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n        ])\n        \n        if augment:\n            train_transforms = transforms.Compose([\n                transforms.Resize((self.input_shape[1], self.input_shape[2])),\n                transforms.RandomHorizontalFlip(),\n                transforms.ColorJitter(0.3, 0.3, 0.3, 0.3),\n                transforms.RandomGrayscale(),\n                transforms.ToTensor(),\n                transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n            ])\n        else:\n            train_transforms = test_transforms\n\n        train_data_list = self._create_data_list(train_combinations, root_dir, train_transforms)\n        test_data_list = self._create_data_list(test_combinations, root_dir, test_transforms)\n\n        return train_data_list, test_data_list\n\n    # Creates a list of datasets based on the given combinations and transformations.\n    def _create_data_list(self, combinations, root_dir, transforms):\n        data_list = []\n        if isinstance(combinations, dict):\n            \n            # Build class groups for a given set of combinations, root directory, and transformations.\n            for_each_class_group = []\n            cg_index = 0\n            for classes, comb_list in combinations.items():\n                for_each_class_group.append([])\n                for ind, location_limit in enumerate(comb_list):\n                    if isinstance(location_limit, tuple):\n                        location, limit = location_limit\n                    else:\n                        location, limit = location_limit, None\n                    cg_data_list = []\n                    for cls in classes:\n                        path = os.path.join(root_dir, f\"{0 if not self.type1 else ind}/{location}/{cls}\")\n                        data = CustomImageFolder(folder_path=path, class_index=self.class_list.index(cls), limit=limit, transform=transforms)\n                        cg_data_list.append(data)\n                    \n                    for_each_class_group[cg_index].append(ConcatDataset(cg_data_list))\n                cg_index += 1\n\n            for group in range(len(for_each_class_group[0])):\n                data_list.append(\n                    ConcatDataset(\n                        [for_each_class_group[k][group] for k in range(len(for_each_class_group))]\n                    )\n                )\n        else:\n            for location in combinations:\n                path = os.path.join(root_dir, f\"{0}/{location}/\")\n                data = ImageFolder(root=path, transform=transforms)\n                data_list.append(data)\n\n        return data_list\n\n    \n    # Builds the combination dictionary for the o2o (one-to-one) datasets\n    def build_type1_combination(self,group,test,filler):\n        total = 3168\n        counts = [int(0.97*total),int(0.87*total)]\n        combinations = {}\n        combinations['train_combinations'] = {\n            ## correlated class\n            
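# each class is paired with one spurious background; the two training environments keep 97% and 87% of total images, and the filler location supplies the remainder\n            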
(\"bulldog\",):[(group[0],counts[0]),(group[0],counts[1])],\n            (\"dachshund\",):[(group[1],counts[0]),(group[1],counts[1])],\n            (\"labrador\",):[(group[2],counts[0]),(group[2],counts[1])],\n            (\"corgi\",):[(group[3],counts[0]),(group[3],counts[1])],\n            ## filler\n            (\"bulldog\",\"dachshund\",\"labrador\",\"corgi\"):[(filler,total-counts[0]),(filler,total-counts[1])],\n        }\n        ## TEST\n        combinations['test_combinations'] = {\n            (\"bulldog\",):[test[0], test[0]],\n            (\"dachshund\",):[test[1], test[1]],\n            (\"labrador\",):[test[2], test[2]],\n            (\"corgi\",):[test[3], test[3]],\n        }\n        return combinations\n\n    # Buils combination dictionary for m2m datasets\n    def build_type2_combination(self,group,test):\n        total = 3168\n        counts = [total,total]\n        combinations = {}\n        combinations['train_combinations'] = {\n            ## correlated class\n            (\"bulldog\",):[(group[0],counts[0]),(group[1],counts[1])],\n            (\"dachshund\",):[(group[1],counts[0]),(group[0],counts[1])],\n            (\"labrador\",):[(group[2],counts[0]),(group[3],counts[1])],\n            (\"corgi\",):[(group[3],counts[0]),(group[2],counts[1])],\n        }\n        combinations['test_combinations'] = {\n            (\"bulldog\",):[test[0], test[1]],\n            (\"dachshund\",):[test[1], test[0]],\n            (\"labrador\",):[test[2], test[3]],\n            (\"corgi\",):[test[3], test[2]],\n        }\n        return combinations\n\n## Spawrious classes for each Spawrious dataset \nclass SpawriousO2O_easy(SpawriousBenchmark):\n    def __init__(self, root_dir, test_envs, hparams):\n        group = [\"desert\",\"jungle\",\"dirt\",\"snow\"]\n        test = [\"dirt\",\"snow\",\"desert\",\"jungle\"]\n        filler = \"beach\"\n        combinations = self.build_type1_combination(group,test,filler)\n        super().__init__(combinations['train_combinations'], combinations['test_combinations'], root_dir, hparams['data_augmentation'], type1=True)\n\nclass SpawriousO2O_medium(SpawriousBenchmark):\n    def __init__(self, root_dir, test_envs, hparams):\n        group = ['mountain', 'beach', 'dirt', 'jungle']\n        test = ['jungle', 'dirt', 'beach', 'snow']\n        filler = \"desert\"\n        combinations = self.build_type1_combination(group,test,filler)\n        super().__init__(combinations['train_combinations'], combinations['test_combinations'], root_dir, hparams['data_augmentation'], type1=True)\n\nclass SpawriousO2O_hard(SpawriousBenchmark):\n    def __init__(self, root_dir, test_envs, hparams):\n        group = ['jungle', 'mountain', 'snow', 'desert']\n        test = ['mountain', 'snow', 'desert', 'jungle']\n        filler = \"beach\"\n        combinations = self.build_type1_combination(group,test,filler)\n        super().__init__(combinations['train_combinations'], combinations['test_combinations'], root_dir, hparams['data_augmentation'], type1=True)\n\nclass SpawriousM2M_easy(SpawriousBenchmark):\n    def __init__(self, root_dir, test_envs, hparams):\n        group = ['desert', 'mountain', 'dirt', 'jungle']\n        test = ['dirt', 'jungle', 'mountain', 'desert']\n        combinations = self.build_type2_combination(group,test)\n        super().__init__(combinations['train_combinations'], combinations['test_combinations'], root_dir, hparams['data_augmentation']) \n\nclass SpawriousM2M_medium(SpawriousBenchmark):\n    def __init__(self, root_dir, test_envs, 
hparams):\n        group = ['beach', 'snow', 'mountain', 'desert']\n        test = ['desert', 'mountain', 'beach', 'snow']\n        combinations = self.build_type2_combination(group,test)\n        super().__init__(combinations['train_combinations'], combinations['test_combinations'], root_dir, hparams['data_augmentation'])\n        \nclass SpawriousM2M_hard(SpawriousBenchmark):\n    ENVIRONMENTS = [\"Test\",\"SC_group_1\",\"SC_group_2\"]\n    def __init__(self, root_dir, test_envs, hparams):\n        group = [\"dirt\",\"jungle\",\"snow\",\"beach\"]\n        test = [\"snow\",\"beach\",\"dirt\",\"jungle\"]\n        combinations = self.build_type2_combination(group,test)\n        super().__init__(combinations['train_combinations'], combinations['test_combinations'], root_dir, hparams['data_augmentation'])"
  },
  {
    "path": "transopt/benchmark/HPOOOD/wide_resnet.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\n\"\"\"\nFrom https://github.com/meliketoy/wide-resnet.pytorch\n\"\"\"\n\nimport sys\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init\nfrom torch.autograd import Variable\n\n\ndef conv3x3(in_planes, out_planes, stride=1):\n    return nn.Conv2d(\n        in_planes,\n        out_planes,\n        kernel_size=3,\n        stride=stride,\n        padding=1,\n        bias=True)\n\n\ndef conv_init(m):\n    classname = m.__class__.__name__\n    if classname.find('Conv') != -1:\n        init.xavier_uniform_(m.weight, gain=np.sqrt(2))\n        init.constant_(m.bias, 0)\n    elif classname.find('BatchNorm') != -1:\n        init.constant_(m.weight, 1)\n        init.constant_(m.bias, 0)\n\n\nclass wide_basic(nn.Module):\n    def __init__(self, in_planes, planes, dropout_rate, stride=1):\n        super(wide_basic, self).__init__()\n        self.bn1 = nn.BatchNorm2d(in_planes)\n        self.conv1 = nn.Conv2d(\n            in_planes, planes, kernel_size=3, padding=1, bias=True)\n        self.dropout = nn.Dropout(p=dropout_rate)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.conv2 = nn.Conv2d(\n            planes, planes, kernel_size=3, stride=stride, padding=1, bias=True)\n\n        self.shortcut = nn.Sequential()\n        if stride != 1 or in_planes != planes:\n            self.shortcut = nn.Sequential(\n                nn.Conv2d(\n                    in_planes, planes, kernel_size=1, stride=stride,\n                    bias=True), )\n\n    def forward(self, x):\n        out = self.dropout(self.conv1(F.relu(self.bn1(x))))\n        out = self.conv2(F.relu(self.bn2(out)))\n        out += self.shortcut(x)\n\n        return out\n\n\nclass Wide_ResNet(nn.Module):\n    \"\"\"Wide Resnet with the softmax layer chopped off\"\"\"\n    def __init__(self, input_shape, depth, widen_factor, dropout_rate):\n        super(Wide_ResNet, self).__init__()\n        self.in_planes = 16\n\n        assert ((depth - 4) % 6 == 0), 'Wide-resnet depth should be 6n+4'\n        n = (depth - 4) / 6\n        k = widen_factor\n\n        # print('| Wide-Resnet %dx%d' % (depth, k))\n        nStages = [16, 16 * k, 32 * k, 64 * k]\n\n        self.conv1 = conv3x3(input_shape[0], nStages[0])\n        self.layer1 = self._wide_layer(\n            wide_basic, nStages[1], n, dropout_rate, stride=1)\n        self.layer2 = self._wide_layer(\n            wide_basic, nStages[2], n, dropout_rate, stride=2)\n        self.layer3 = self._wide_layer(\n            wide_basic, nStages[3], n, dropout_rate, stride=2)\n        self.bn1 = nn.BatchNorm2d(nStages[3], momentum=0.9)\n\n        self.n_outputs = nStages[3]\n\n    def _wide_layer(self, block, planes, num_blocks, dropout_rate, stride):\n        strides = [stride] + [1] * (int(num_blocks) - 1)\n        layers = []\n\n        for stride in strides:\n            layers.append(block(self.in_planes, planes, dropout_rate, stride))\n            self.in_planes = planes\n\n        return nn.Sequential(*layers)\n\n    def forward(self, x):\n        out = self.conv1(x)\n        out = self.layer1(out)\n        out = self.layer2(out)\n        out = self.layer3(out)\n        out = F.relu(self.bn1(out))\n        out = F.avg_pool2d(out, 8)\n        return out[:, :, 0, 0]\n"
  },
  {
    "path": "transopt/benchmark/RL/LunarlanderBenchmark.py",
    "content": "import gym\nimport logging\nimport random\nimport numpy as np\nimport ConfigSpace as CS\nimport matplotlib.pyplot as plt\nfrom scipy.stats import pearsonr, spearmanr\nfrom gplearn.genetic import SymbolicRegressor\nfrom typing import Union, Dict\n\nfrom transopt.benchmark.problem_base import NonTabularProblem\nfrom agent.registry import benchmark_register\n\n\nlogger = logging.getLogger(\"LunarLanderBenchmark\")\n\n# 计算两组数据的 Pearson 相关系数和 p 值\n\n\ndef lunar_lander_simulation(w, print_reward=False, seed=1, dimension=12):\n    total_reward = 0.0\n    steps = 0\n    env_name = \"LunarLander-v2\"\n    env = gym.make(env_name)\n    s = env.reset(seed=seed)[0]\n    while True:\n        if dimension == 5:\n            a = heuristic_controller5d(s, w, is_continuous=False)\n        # elif dimension == 6:\n        #     a = heuristic_controller6d(s, w, is_continuous=False)\n        # elif dimension == 8:\n        #     a = heuristic_controller8d(s, w, is_continuous=False)\n        elif dimension == 10:\n            a = heuristic_controller10d(s, w, is_continuous=False)\n        else:\n            a = heuristic_controller(s[0], w)\n        s, r, done, info, _ = env.step(a)\n        total_reward += r\n        steps += 1\n        if done:\n            break\n    if print_reward:\n        print(f\"Total reward: {total_reward}\")\n    return total_reward\n\n\ndef heuristic_controller(s, w, is_continuous=True):\n    # w is the array of controller parameters of shape (1, 12)\n    angle_target = s[0] * w[0] + s[2] * w[1]\n    if angle_target > w[2]:\n        angle_target = w[2]\n    if angle_target < w[-2]:\n        angle_target = -w[2]\n    hover_target = w[3] * np.abs(s[0])\n    angle_todo = (angle_target - s[4]) * w[4] - (s[5]) * w[5]\n    hover_todo = (hover_target - s[1]) * w[6] - (s[3]) * w[7]\n    if s[6] or s[7]:\n        angle_todo = w[8]\n        hover_todo = -(s[3]) * w[9]\n    if is_continuous:\n        a = np.array([hover_todo * 20 - 1, angle_todo * 20])\n        a = np.clip(a, -1, +1)\n    else:\n        a = 0\n        if hover_todo > np.abs(angle_todo) and hover_todo > w[10]:\n            a = 2\n        elif angle_todo < -w[11]:\n            a = 3\n        elif angle_todo > +w[11]:\n            a = 1\n    return a\n\n\ndef heuristic_controller5d(s, w, is_continuous=True):\n    # w is the array of controller parameters of shape (1, 12)\n    angle_target = s[0] * w[0] + s[2] * 1.0\n    if angle_target > 0.4:\n        angle_target = 0.4\n    if angle_target < -0.4:\n        angle_target = -0.4\n    hover_target = w[1] * np.abs(s[0])\n    angle_todo = (angle_target - s[4]) * w[2] - (s[5]) * w[3]\n    hover_todo = (hover_target - s[1]) * w[4] - (s[3]) * 0.5\n    if s[6] or s[7]:\n        angle_todo = 0\n        hover_todo = (\n            -(s[3]) * 0.5\n        )  # override to reduce fall speed, that's all we need after contact\n\n    if is_continuous:\n        a = np.array([hover_todo * 20 - 1, angle_todo * 20])\n        a = np.clip(a, -1, +1)\n    else:\n        a = 0\n        if hover_todo > np.abs(angle_todo) and hover_todo > 0.05:\n            a = 2\n        elif angle_todo < -0.05:\n            a = 3\n        elif angle_todo > +0.05:\n            a = 1\n    return a\n\n\n#\n# def heuristic_controller6d(s, w, is_continuous=True):\n#     # w is the array of controller parameters of shape (1, 12)\n#     angle_target = s[0] * w[0] + s[2] *  w[1]\n#     if angle_target > 0.4:\n#         angle_target = 0.4\n#     if angle_target < -0.4:\n#         angle_target = -0.4\n#     
hover_target = w[2] * np.abs(s[0])\n#     angle_todo = (angle_target - s[4]) * w[3] - (s[5]) * w[4]\n#     hover_todo = (hover_target - s[1]) * w[5] - (s[3]) * 0.5\n#     if s[6] or s[7]:\n#         angle_todo = 0\n#         hover_todo = (\n#                 -(s[3]) * 0.5\n#         )  # override to reduce fall speed, that's all we need after contact\n#\n#     if is_continuous:\n#         a = np.array([hover_todo * 20 - 1, angle_todo * 20])\n#         a = np.clip(a, -1, +1)\n#     else:\n#         a = 0\n#         if hover_todo > np.abs(angle_todo) and hover_todo > 0.05:\n#             a = 2\n#         elif angle_todo < -0.05:\n#             a = 3\n#         elif angle_todo > +0.05:\n#             a = 1\n#     return a\n#\n#\n# def heuristic_controller8d(s, w, is_continuous=True):\n#     # w is the array of controller parameters of shape (1, 12)\n#     angle_target = s[0] * w[0] + s[2] * w[1]\n#     if angle_target > w[2]:\n#         angle_target = w[2]\n#     if angle_target < -w[2]:\n#         angle_target = -w[2]\n#     hover_target = w[3] * np.abs(s[0])\n#     angle_todo = (angle_target - s[4]) * w[4] - (s[5]) * w[5]\n#     hover_todo = (hover_target - s[1]) * w[6] - (s[3]) * w[7]\n#     if s[6] or s[7]:\n#         angle_todo = 0\n#         hover_todo = (\n#             -(s[3]) * 0.5\n#         )  # override to reduce fall speed, that's all we need after contact\n#\n#     if is_continuous:\n#         a = np.array([hover_todo * 20 - 1, angle_todo * 20])\n#         a = np.clip(a, -1, +1)\n#     else:\n#         a = 0\n#         if hover_todo > np.abs(angle_todo) and hover_todo > 0.05:\n#             a = 2\n#         elif angle_todo < -0.05:\n#             a = 3\n#         elif angle_todo > +0.05:\n#             a = 1\n#     return a\n#\n\n\ndef heuristic_controller10d(s, w, is_continuous=True):\n    # w is the array of controller parameters of shape (1, 12)\n    angle_target = s[0] * w[0] + s[2] * w[1]\n    if angle_target > w[2]:\n        angle_target = w[2]\n    if angle_target < -w[2]:\n        angle_target = -w[2]\n    hover_target = w[3] * np.abs(s[0])\n    angle_todo = (angle_target - s[4]) * w[4] - (s[5]) * w[5]\n    hover_todo = (hover_target - s[1]) * w[6] - (s[3]) * w[7]\n    if s[6] or s[7]:\n        angle_todo = w[8]\n        hover_todo = -(s[3]) * w[9]\n    if is_continuous:\n        a = np.array([hover_todo * 20 - 1, angle_todo * 20])\n        a = np.clip(a, -1, +1)\n    else:\n        a = 0\n        if hover_todo > np.abs(angle_todo) and hover_todo > 0.05:\n            a = 2\n        elif angle_todo < -0.05:\n            a = 3\n        elif angle_todo > +0.05:\n            a = 1\n    return a\n\n\ndef vanilla_heuristic(s, is_continuous=False):\n    angle_targ = s[0] * 0.5 + s[2] * 1.0  # angle should point towards center\n    if angle_targ > 0.4:\n        angle_targ = 0.4  # more than 0.4 radians (22 degrees) is bad\n    if angle_targ < -0.4:\n        angle_targ = -0.4\n    hover_targ = 0.55 * np.abs(\n        s[0]\n    )  # target y should be proportional to horizontal offset\n\n    angle_todo = (angle_targ - s[4]) * 0.5 - (s[5]) * 1.0\n    hover_todo = (hover_targ - s[1]) * 0.5 - (s[3]) * 0.5\n\n    if s[6] or s[7]:  # legs have contact\n        angle_todo = 0\n        hover_todo = (\n            -(s[3]) * 0.5\n        )  # override to reduce fall speed, that's all we need after contact\n\n    if is_continuous:\n        a = np.array([hover_todo * 20 - 1, -angle_todo * 20])\n        a = np.clip(a, -1, +1)\n    else:\n        a = 0\n        if hover_todo > 
np.abs(angle_todo) and hover_todo > 0.05:\n            a = 2\n        elif angle_todo < -0.05:\n            a = 3\n        elif angle_todo > +0.05:\n            a = 1\n    return a\n\n\n@benchmark_register(\"Lunar\")\nclass LunarlanderBenchmark(NonTabularProblem):\n    \"\"\"\n    Lunar Lander benchmark.\n\n    Tunes the weights of a heuristic controller for the LunarLander-v2\n    environment; the objective value is the total episode reward.\n    \"\"\"\n\n    lunar_seeds = [2, 3, 4, 5, 10, 14, 15, 19]\n\n    def __init__(self, task_name, task_id, budget, seed, task_type=\"non-tabular\"):\n        super(LunarlanderBenchmark, self).__init__(\n            task_name=task_name, seed=seed, task_type=task_type, budget=budget\n        )\n        self.lunar_seed = LunarlanderBenchmark.lunar_seeds[task_id]\n\n    def objective_function(\n        self,\n        configuration: Union[CS.Configuration, Dict],\n        fidelity: Union[Dict, CS.Configuration, None] = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([configuration[k] for idx, k in enumerate(configuration.keys())])\n\n        y = lunar_lander_simulation(X, seed=self.lunar_seed, dimension=self.input_dim)\n        return {\"function_value\": float(y), \"info\": {\"fidelity\": fidelity}}\n\n    def get_configuration_space(\n        self, seed: Union[int, None] = None\n    ) -> CS.ConfigurationSpace:\n        \"\"\"\n        Creates a ConfigSpace.ConfigurationSpace containing all parameters for\n        the Lunar Lander controller\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the ConfigSpace.ConfigurationSpace\n\n        Returns\n        -------\n        ConfigSpace.ConfigurationSpace\n        \"\"\"\n        seed = seed if seed is not None else np.random.randint(1, 100000)\n        cs = CS.ConfigurationSpace(seed=seed)\n        cs.add_hyperparameters(\n            [\n                CS.UniformFloatHyperparameter(f\"x{i}\", lower=0, upper=2.0)\n                for i in range(10)\n            ]\n        )\n\n        return cs\n\n    def get_fidelity_space(\n        self, seed: Union[int, None] = None\n    ) -> CS.ConfigurationSpace:\n        \"\"\"\n        Creates a ConfigSpace.ConfigurationSpace containing all fidelity parameters for\n        the Lunar Lander benchmark\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the ConfigSpace.ConfigurationSpace\n\n        Returns\n        -------\n        ConfigSpace.ConfigurationSpace\n        \"\"\"\n        seed = seed if seed is not None else np.random.randint(1, 100000)\n        fidel_space = CS.ConfigurationSpace(seed=seed)\n\n        fidel_space.add_hyperparameters([])\n\n        return fidel_space\n\n    def get_meta_information(self) -> Dict:\n        return {}\n\n\nif __name__ == \"__main__\":\n    seed_list = [2, 3, 4, 5, 10, 14, 15, 19]\n    result_vectors = []\n    for seed in seed_list:\n        # set the random seed\n        np.random.seed(seed)\n        # evaluate the function 100 times and record the results\n        sample_number = 100\n        dim = 10\n\n        fixed_dims = {0: 2.0, 1: 1.8, 2: 0.01, 4: 0.01, 5: 0.01}\n\n        # Generate random data for other dimensions\n        samples_x = np.random.uniform(-1, 1, (sample_number, dim))\n\n        # Assign fixed values to specified dimensions\n        # for dim, value in fixed_dims.items():\n        #     samples_x[:, dim] = value\n\n        # samples_x= np.random.uniform(0, 2, size=(sample_number, dim))\n        # samples_x = np.sort(samples_x, axis=0)\n
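        # NOTE: as written, only samples_x[0] is turned into a configuration and\n        # evaluated below. A sweep over every sampled row (a sketch, not part of\n        # the original script) could look like:\n        #     results = [bench.f({f\"x{i}\": row[i] for i in range(dim)}) for row in samples_x]\n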
        # samples_x =  np.random.uniform(0, 2, size=(100, 10))\n        bench = LunarlanderBenchmark(task_name=\"lunar\", task_id=0, seed=0, budget=10000)\n        xx = {}\n        for i in range(10):\n            xx[f\"x{i}\"] = samples_x[0][i]\n        result = bench.f(xx)\n        print(result)\n        # reshape the results into a 100x1 vector\n        result_vector = np.array(result).reshape(-1, 1)\n\n        # store the result vector in the list\n        result_vectors.append(result_vector)\n\n        plt.figure()\n        plt.clf()\n        # plot the distribution of the sampled results\n        plt.hist(result, bins=30, density=True, alpha=0.7)\n        # # add axis labels and a title\n        plt.xlabel(\"Value\")\n        plt.ylabel(\"Density\")\n        plt.title(f\"Distribution of Sampled Function, seed:{seed}\")\n        plt.show()\n        # plt.savefig(f'seed_{seed}')\n\n        # fit a symbolic regressor\n        est_gp = SymbolicRegressor(\n            population_size=5000,\n            generations=20,\n            stopping_criteria=0.01,\n            p_crossover=0.7,\n            p_subtree_mutation=0.1,\n            p_hoist_mutation=0.05,\n            p_point_mutation=0.1,\n            max_samples=0.9,\n            verbose=1,\n            parsimony_coefficient=0.01,\n            random_state=0,\n        )\n        est_gp.fit(samples_x, result_vector)\n\n        print(\"Best program:\", est_gp._program)\n\n    # run a correlation analysis between each pair of result vectors\n    for i, vector1 in enumerate(result_vectors):\n        for j, vector2 in enumerate(result_vectors):\n            if i != j:\n                correlation, p = spearmanr(vector1.flatten(), vector2.flatten())\n                print(\n                    f\"Correlation between seed {seed_list[i]} and seed {seed_list[j]}: {correlation}, p={p}\"\n                )\n"
  },
  {
    "path": "transopt/benchmark/RL/__init__.py",
    "content": ""
  },
  {
    "path": "transopt/benchmark/__init__.py",
    "content": "# from transopt.benchmark.instantiate_problems import InstantiateProblems\n"
  },
  {
    "path": "transopt/benchmark/instantiate_problems.py",
    "content": "from transopt.agent.registry import problem_registry\n# from transopt.benchmark.problem_base.tab_problem import TabularProblem\nfrom transopt.benchmark.problem_base.transfer_problem import TransferProblem, RemoteTransferOptBenchmark\n\n\ndef InstantiateProblems(\n    tasks: dict = None, seed: int = 0, remote: bool = False, server_url: str = None\n) -> TransferProblem:\n    tasks = tasks or {}\n\n    if remote:\n        if server_url is None:\n            raise ValueError(\"Server URL must be provided for remote testing.\")\n        transfer_problems = RemoteTransferOptBenchmark(server_url, seed)\n    else:\n        transfer_problems = TransferProblem(seed)\n\n    for task_name, task_params in tasks.items():\n        budget = task_params.get(\"budget\", 0)\n        workloads = task_params.get(\"workloads\", [])\n        budget_type = task_params.get(\"budget_type\", 'Num_FEs')\n        params = task_params.get(\"params\", {})\n\n\n        problem_cls = problem_registry[task_name]\n        if problem_cls is None:\n            raise KeyError(f\"Task '{task_name}' not found in the problem registry.\")\n\n        for idx, workload in enumerate(workloads):\n            problem = problem_cls(\n                task_name=f\"{task_name}\",\n                task_id=idx,\n                budget_type=budget_type,\n                budget=budget,\n                seed=seed,\n                workload=workload,\n                params=params,\n            )\n            transfer_problems.add_task(problem)\n\n    return transfer_problems\n"
  },
  {
    "path": "transopt/benchmark/problem_base/__init__.py",
    "content": "# from benchmark.problem_base.base import ProblemBase\n# from benchmark.problem_base.non_tab_problem import NonTabularProblem\n# from benchmark.problem_base.tab_problem import TabularProblem\n# from benchmark.problem_base.transfer_problem import TransferProblem, RemoteTransferOptBenchmark\n"
  },
  {
    "path": "transopt/benchmark/problem_base/base.py",
    "content": "\"\"\" Base-class of all benchmarks \"\"\"\n\nimport abc\nimport logging\n\nfrom numpy.random.mtrand import RandomState as RandomState\nfrom transopt.space.search_space import SearchSpace\nfrom transopt.space.fidelity_space import FidelitySpace\nimport numpy as np\nfrom typing import Union, Dict\nfrom transopt.space.variable import *\n\nlogger = logging.getLogger(\"AbstractProblem\")\n\n\nclass ProblemBase(abc.ABC):\n    def __init__(self, seed: Union[int, np.random.RandomState, None] = None, **kwargs):\n        \"\"\"\n        Interface for benchmarks.\n\n        A benchmark consists of two building blocks, the target function and\n        the configuration space. Furthermore it can contain additional\n        benchmark-specific information such as the location and the function\n        value of the global optima.\n        New benchmarks should be derived from this base class or one of its\n        child classes.\n\n        Parameters\n        ----------\n        seed: int, np.random.RandomState, None\n            The default random state for the benchmark. If type is int, a\n            np.random.RandomState with seed `rng` is created. If type is None,\n            create a new random state.\n        \"\"\"\n\n        self.seed = seed\n        self.fidelity_space = self.get_fidelity_space()\n        self.objective_info = self.get_objectives()\n        self.problem_type = self.get_problem_type()\n        self.configuration_space = self.get_configuration_space()\n        \n        self.input_dim = len(self.configuration_space.get_hyperparameter_names())\n        self.num_objective = len(self.objective_info)\n\n    def f(self, configuration, fidelity=None, seed=None, **kwargs) -> Dict:\n        # Check validity of configuration and fidelity before evaluation\n        self.check_validity(configuration, fidelity)\n\n        # Delegate to the specific evaluation method implemented by subclasses\n        return self.objective_function(configuration, fidelity, seed, **kwargs)\n\n    @abc.abstractmethod\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        \"\"\"Implement this method in subclasses to define specific evaluation logic.\"\"\"\n        raise NotImplementedError\n\n\n    @staticmethod\n    @abc.abstractmethod\n    def get_configuration_space(self) -> SearchSpace:\n        \"\"\"Defines the configuration space for each benchmark.\n        Parameters\n        ----------\n        seed: int, None\n            Seed for the configuration space.\n\n        Returns\n        -------\n        ConfigSpace.ConfigurationSpace\n            A valid configuration space for the benchmark's parameters\n        \"\"\"\n        raise NotImplementedError()\n\n    def check_validity(self, configuration, fidelity):\n        # Check if each configuration key and value is valid\n        for key, value in configuration.items():\n            if key not in self.configuration_space.ranges:\n                raise ValueError(f\"Configuration key {key} is not valid.\")\n\n            if type(self.configuration_space.get_design_variable(key)) is Categorical:\n                if not (value in self.configuration_space.get_design_variable(key).categories):\n                    raise ValueError(\n                        f\"Value of {key}={value} is out of allowed range {range}.\"\n                    )\n            else:\n                
design_range = design_var.range\n                if not (design_range[0] <= value <= design_range[1]):\n                    raise ValueError(\n                        f\"Value of {key}={value} is out of allowed range {design_range}.\"\n                    )\n\n        if fidelity is None:\n            return\n\n        # Check if each fidelity key and value is valid\n        for key, value in fidelity.items():\n            if key not in self.fidelity_space.ranges:\n                raise ValueError(f\"Fidelity key {key} is not valid.\")\n            fidelity_range = self.fidelity_space.ranges[key]\n            if not (fidelity_range[0] <= value <= fidelity_range[1]):\n                raise ValueError(\n                    f\"Value of {key}={value} is out of allowed range {fidelity_range}.\"\n                )\n\n    def __call__(self, configuration: Dict, **kwargs) -> float:\n        \"\"\"Provides an interface for use with, e.g., SciPy optimizers\"\"\"\n        return self.f(configuration, **kwargs)[\"function_value\"]\n\n    @abc.abstractmethod\n    def get_fidelity_space(self) -> FidelitySpace:\n        \"\"\"Defines the available fidelity parameters as a \"fidelity space\" for each benchmark.\n\n        Returns\n        -------\n        FidelitySpace\n            A valid fidelity space for the benchmark's fidelity parameters\n        \"\"\"\n        raise NotImplementedError()\n\n    @abc.abstractmethod\n    def get_objectives(self) -> dict:\n        \"\"\"Defines the objectives to be optimized for each benchmark.\n\n        Returns\n        -------\n        dict\n            A mapping from each objective name to its information\n        \"\"\"\n        raise NotImplementedError()\n\n    @property\n    @abc.abstractmethod\n    def problem_type(self):\n        raise NotImplementedError()\n\n    @property\n    @abc.abstractmethod\n    def num_objectives(self):\n        raise NotImplementedError()\n\n    @property\n    @abc.abstractmethod\n    def num_variables(self):\n        raise NotImplementedError()\n"
  },
  {
    "path": "transopt/benchmark/problem_base/non_tab_problem.py",
    "content": "\"\"\" Base-class of configuration optimization benchmarks \"\"\"\nimport json\nimport logging\nimport os\nfrom pathlib import Path\nfrom typing import Dict, List, Union\n\nimport numpy as np\n\nfrom transopt.benchmark.problem_base.base import ProblemBase\n\nlogger = logging.getLogger(\"NonTabularProblem\")\n\n\nimport abc\n\n\nclass NonTabularProblem(ProblemBase):\n    def __init__(\n        self,\n        task_name: str,\n        budget_type,\n        budget: int,\n        workload,\n        seed: Union[int, np.random.RandomState, None] = None,\n        **kwargs,\n    ):\n        self.task_name = task_name\n        self.budget = budget\n        self.workload = workload\n        self.lock_flag = False\n        self.budget_type = budget_type\n\n        super(NonTabularProblem, self).__init__(seed, **kwargs)\n\n    def get_budget_type(self) -> str:\n        \"\"\"Provides the budget type about the benchmark.\n\n        Returns\n        -------\n        str\n            some human-readable information\n\n        \"\"\"\n        return self.budget_type\n\n    def get_budget(self) -> int:\n        \"\"\"Provides the function evaluations number about the benchmark.\n\n        Returns\n        -------\n        int\n            some human-readable information\n\n        \"\"\"\n        return self.budget\n\n    def get_name(self) -> str:\n        \"\"\"Provides the task name about the benchmark.\n\n        Returns\n        -------\n        str\n            some human-readable information\n\n        \"\"\"\n        return self.task_name\n\n    def get_type(self) -> str:\n        \"\"\"Provides the task type about the benchmark.\n\n        Returns\n        -------\n        str\n            some human-readable information\n\n        \"\"\"\n        return self.problem_type\n\n    def get_input_dim(self) -> int:\n        \"\"\"Provides the input dimension about the benchmark.\n\n        Returns\n        -------\n        int\n            some human-readable information\n\n        \"\"\"\n        return self.num_variables\n\n    def get_objective_num(self) -> int:\n        return self.num_objectives\n\n    def lock(self):\n        self.lock_flag = True\n\n    def unlock(self):\n        self.lock_flag = False\n\n    def get_lock_state(self) -> bool:\n        return self.lock_flag\n    \n    @property\n    @abc.abstractmethod\n    def workloads(self):\n        raise NotImplementedError()\n    \n    @property\n    @abc.abstractmethod\n    def fidelity(self):\n        raise NotImplementedError()"
  },
  {
    "path": "transopt/benchmark/problem_base/tab_problem.py",
    "content": "import logging\nimport os\nfrom pathlib import Path\nfrom typing import Dict, List, Union\nfrom urllib.parse import urlparse\n\nimport numpy as np\nimport pandas as pds\n\nfrom transopt.benchmark.problem_base.base import ProblemBase\nfrom transopt.utils.encoding import multitarget_encoding, target_encoding\nfrom transopt.utils.Read import read_file\n\nlogger = logging.getLogger(\"TabularProblem\")\n\n\n\nclass TabularProblem(ProblemBase):\n    def __init__(\n            self,\n            task_name: str,\n            task_type: str,\n            budget: int,\n            workload,\n            path: str = None,\n            seed: Union[int, np.random.RandomState, None] = None,\n            space_info: Dict = None,\n            **kwargs,\n    ):\n\n        super(TabularProblem, self).__init__(task_name= task_name, task_type=task_type, budget=budget,workload=workload, seed=seed, **kwargs)\n        self.path = path\n\n        parsed = urlparse(path)\n        if parsed.scheme and parsed.netloc:\n            return \"URL\"\n        # If the string is a valid file path\n        elif os.path.exists(path) or os.path.isabs(path):\n            dir_path = Path(path)\n            workload_path = dir_path / workload\n            data = read_file(workload_path)\n            unnamed_columns = [col for col in data.columns if \"Unnamed\" in col]\n            # delete the unnamed column\n            data.drop(unnamed_columns, axis=1, inplace=True)\n\n            para_names = [value for value in data.columns]\n            if space_info is None or not isinstance(space_info, dict):\n                self.space_info = {}\n            else:\n                self.space_info = space_info\n\n            if 'input_dim' not in self.space_info and 'num_objective' not in self.space_info:\n                self.space_info['input_dim'] = len(para_names) - 1\n\n                self.space_info['num_objective'] = len(para_names) - self.space_info['input_dim']\n            elif 'input_dim' in self.space_info and 'num_objective' in self.space_info:\n                pass\n            else:\n                if 'num_objective' in self.space_info:\n                    self.space_info['input_dim'] = len(para_names) - self.space_info['num_objective']\n\n                if 'input_dim' in self.space_info:\n                        self.space_info['num_objective'] = len(para_names) - self.space_info['input_dim']\n\n            self.input_dim = self.space_info['input_dim']\n            self.num_objective = self.space_info['num_objective']\n            self.encodings = {}\n            for i in range(self.num_objective):\n                data[f\"function_value_{i+1}\"] = data[para_names[self.input_dim+i]]\n\n            if 'variables' not in self.space_info:\n                self.space_info['variables'] = {}\n                for i in range(self.space_info['input_dim']):\n                    var_name = para_names[i]\n                    max_value = data[var_name].max()\n                    min_value = data[var_name].min()\n                    contains_decimal = False\n                    contains_str = False\n                    if data[var_name][1:].nunique() > 10:\n                        for item in data[var_name][1:]:\n                            if isinstance(item, str):\n                                contains_str = True\n                            if int(item) - item != 0:\n                                contains_decimal = True\n                                break  # 如果找到小数，无需继续检查\n                        if 
contains_decimal:\n                            var_type = 'continuous'\n                            self.space_info['variables'][var_name] = {'bounds': [min_value, max_value],\n                                                                      'type': var_type}\n                        elif contains_str:\n                            var_type = 'categorical'\n                            data[var_name] = data[var_name].astype(str)\n\n                            self.space_info['variables'][var_name] = {'bounds': [0, len(data[var_name][1:].unique()) - 1],\n                                                                      'type': var_type}\n                            if self.num_objective > 1:\n                                self.cat_mapping = multitarget_encoding(data, var_name, [f'function_value_{i+1}' for i in range(self.num_objective)])\n                            else:\n                                self.cat_mapping = target_encoding(data, var_name, 'function_value_1')\n\n                        else:\n                            var_type = 'integer'\n                            data[var_name] = data[var_name].astype(int)\n                            self.space_info['variables'][var_name] = {'bounds': [min_value, max_value],\n                                                                      'type': var_type}\n                    else:\n                        var_type = 'categorical'\n                        data[var_name] = data[var_name].astype(str)\n\n\n                        if self.num_objective > 1:\n                            self.cat_mapping = multitarget_encoding(data, var_name, [f'function_value_{i + 1}' for i in\n                                                                           range(self.num_objective)])\n                        else:\n                            self.cat_mapping = target_encoding(data, var_name, 'function_value_1')\n                        # find the largest and smallest keys of the encoding\n                        max_key = max(self.cat_mapping.keys())\n                        min_key = min(self.cat_mapping.keys())\n                        self.space_info['variables'][var_name] = {'bounds': [min_key, max_key],\n                                                                  'type': var_type}\n\n\n            data['config'] = data.apply(lambda row: row[:self.input_dim].tolist(), axis=1)\n            data[\"config_s\"] = data[\"config\"].astype(str)\n        else:\n            raise ValueError(\"Unknown path type: only a URL or a file path is accepted\")\n\n        \n        self.var_range = self.get_configuration_bound()\n        self.var_type = self.get_configuration_type()\n        self.unqueried_data = data\n        self.queried_data = pds.DataFrame(columns=data.columns)\n\n    def f(\n            self,\n            configuration: Union[Dict, None],\n            fidelity: Union[Dict, None] = None,\n            **kwargs,\n    ) -> Dict:\n\n        results = self.objective_function(\n            configuration=configuration, fidelity=fidelity, seed=self.seed\n        )\n\n        return results\n\n    def objective_function(\n            self,\n            configuration: Dict,\n            fidelity: Union[Dict, None] = None,\n            seed: Union[np.random.RandomState, int, None] = None,\n            **kwargs,\n    ) -> Dict:\n        c = {}\n        for k in configuration.keys():\n            if self.space_info['variables'][k]['type'] == 'categorical':\n                c[k] = self.cat_mapping[configuration[k]]\n            else:\n                c[k] = 
configuration[k]\n\n        X = str([configuration[k] for idx, k in enumerate(configuration.keys())])\n        data = self.unqueried_data[self.unqueried_data['config_s'] == X]\n\n        if not data.empty:\n            self.unqueried_data.drop(data.index, inplace=True)\n            self.queried_data = pds.concat([self.queried_data, data], ignore_index=True)\n        else:\n            raise ValueError(f\"Configuration {X} does not exist in the oracle\")\n\n        res = {}\n        for i in range(self.num_objective):\n            res[f\"function_value_{i+1}\"] = float(data['fitness'])\n        res[\"info\"] = {\"fidelity\": fidelity}\n        return res\n\n    def sample_dataframe(self, key, df, p_remove=0.):\n        \"\"\"Randomly sample the dataframe by the removal percentage.\"\"\"\n        if p_remove < 0 or p_remove >= 1:\n            raise ValueError(\n                f'p_remove={p_remove} but p_remove must be <1 and >= 0.')\n        if p_remove > 0:\n            n_remain = (1 - p_remove) * len(df)\n            n_remain = int(np.ceil(n_remain))\n            df = df.sample(n=n_remain, replace=False, random_state=key[0])\n        return df\n\n    def get_configuration_bound(self):\n        # Bounds are derived from the parsed space_info; the ConfigSpace-based\n        # version below is kept for reference.\n        return {k: v['bounds'] for k, v in self.space_info['variables'].items()}\n\n    # def get_configuration_bound(self):\n    #     configuration_bound = {}\n    #     for k, v in self.configuration_space.items():\n    #         if type(v) is ConfigSpace.CategoricalHyperparameter:\n    #             configuration_bound[k] = [0, len(v.choices) - 1]\n    #         else:\n    #             configuration_bound[k] = [v.lower, v.upper]\n\n    #     return configuration_bound\n\n    def get_configuration_type(self):\n        configuration_type = {}\n        for k, v in self.configuration_space.items():\n            configuration_type[k] = type(v).__name__\n        return configuration_type\n\n    def get_configuration_space(\n            self, seed: Union[int, None] = None\n    ):\n        \"\"\"\n        Creates a ConfigSpace.ConfigurationSpace containing all parameters for\n        the tabular benchmark\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the ConfigSpace.ConfigurationSpace\n\n        Returns\n        -------\n        ConfigSpace.ConfigurationSpace\n        \"\"\"\n        seed = seed if seed is not None else np.random.randint(1, 100000)\n        # cs = CS.ConfigurationSpace(seed=seed)\n        # variables = []\n\n        # for k,v in self.space_info['variables'].items():\n        #     lower = v['bounds'][0]\n        #     upper = v['bounds'][1]\n        #     if 'continuous' == v['type']:\n        #         variables.append(CS.UniformFloatHyperparameter(k, lower=lower, upper=upper))\n        #     elif 'integer' == v['type']:\n        #         variables.append(CS.UniformIntegerHyperparameter(k, lower=lower, upper=upper))\n        #     elif 'categorical' == v['type']:\n        #         variables.append(CS.UniformIntegerHyperparameter(k, lower=lower, upper=upper))\n        #     else:\n        #         raise ValueError('Unknown variable type')\n\n        # cs.add_hyperparameters(variables)\n        # return cs\n\n    def get_fidelity_space(\n            self, seed: Union[int, None] = None\n    ):\n        \"\"\"\n        Creates a ConfigSpace.ConfigurationSpace containing all fidelity parameters for\n        the tabular benchmark\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the ConfigSpace.ConfigurationSpace\n\n        Returns\n        -------\n        ConfigSpace.ConfigurationSpace\n        \"\"\"\n        seed = seed if seed is not None else np.random.randint(1, 100000)\n        # fidel_space = CS.ConfigurationSpace(seed=seed)\n\n        # return fidel_space\n\n    def get_meta_information(self) -> Dict:\n        return {}\n\n    def get_budget(self) -> int:\n        \"\"\"Provides the function evaluation budget of the benchmark.\n\n        Returns\n        -------\n        int\n            the evaluation budget\n\n        \"\"\"\n        return self.budget\n\n    def get_name(self) -> str:\n        \"\"\"Provides the task name of the benchmark.\n\n        Returns\n        -------\n        str\n            the task name\n\n        \"\"\"\n        return self.task_name\n\n    def get_type(self) -> str:\n        \"\"\"Provides the task type of the benchmark.\n\n        Returns\n        -------\n        str\n            the task type\n\n        \"\"\"\n        return self.task_type\n\n    def get_input_dim(self) -> int:\n        \"\"\"Provides the input dimension of the benchmark.\n\n        Returns\n        -------\n        int\n            the number of input variables\n\n        \"\"\"\n        return self.input_dim\n\n    def get_objective_num(self) -> int:\n        return self.num_objective\n\n    def lock(self):\n        self.lock_flag = True\n\n    def unlock(self):\n        self.lock_flag = False\n\n    def get_lock_state(self) -> bool:\n        return self.lock_flag\n\n\n    def get_dataset_size(self):\n        raise NotImplementedError\n\n    def get_var_by_idx(self, idx):\n        raise NotImplementedError\n\n    def get_idx_by_var(self, vectors):\n        raise NotImplementedError\n\n    def get_unobserved_vars(self):\n        raise NotImplementedError\n\n    def get_unobserved_idxs(self):\n        raise NotImplementedError\n"
  },
  {
    "path": "transopt/benchmark/problem_base/transfer_problem.py",
    "content": "import abc\nimport logging\nimport numpy as np\nfrom typing import Union, Dict, List\n\nfrom transopt.benchmark.problem_base.non_tab_problem import NonTabularProblem\nfrom transopt.benchmark.problem_base.tab_problem import TabularProblem\nfrom transopt.remote import ExperimentClient\nfrom transopt.space.search_space import SearchSpace\nlogger = logging.getLogger(\"TransferProblem\")\n\n\nclass TransferProblem:\n    def __init__(self, seed: Union[int, np.random.RandomState, None] = None, **kwargs):\n        self.seed = seed\n        self.tasks = []\n        self.time = []\n        self.query_nums = []\n        self.__id = 0\n\n    def add_task_to_id(\n        self,\n        insert_id: int,\n        task: Union[\n            NonTabularProblem,\n            TabularProblem,\n        ],\n    ):\n        num_tasks = len(self.tasks)\n        assert insert_id < num_tasks + 1\n\n        self.tasks.insert(insert_id, task)\n        self.query_nums.insert(insert_id, 0)\n\n    def add_task(\n        self,\n        task: Union[\n            NonTabularProblem,\n            TabularProblem,\n        ],\n    ):\n        num_tasks = len(self.tasks)\n        insert_id = num_tasks\n        self.add_task_to_id(insert_id, task)\n\n    def del_task_by_id(self, del_id, name):\n        pass\n\n    def get_cur_id(self):\n        return self.__id\n\n    def get_tasks_num(self):\n        return len(self.tasks)\n\n    def get_unsolved_num(self):\n        return len(self.tasks) - self.__id\n\n    def get_rest_budget(self):\n        return self.get_cur_budget() - self.get_query_num()\n\n    def get_query_num(self):\n        return self.query_nums[self.__id]\n\n    def get_cur_budgettype(self):\n        return self.tasks[self.__id].get_budget_type()\n\n    def get_cur_budget(self):\n        return self.tasks[self.__id].get_budget()\n\n    def get_curname(self):\n        return self.tasks[self.__id].get_name()\n\n    def get_curdim(self):\n        return self.tasks[self.__id].get_input_dim()\n\n    def get_curobj_info(self):\n        return self.tasks[self.__id].get_objectives()\n    \n    def get_cur_fidelity_info(self) -> Dict:\n        return self.tasks[self.__id].fidelity_space.get_fidelity_range()\n\n    def get_cur_searchspace_info(self) -> Dict:\n        return self.tasks[self.__id].configuration_space.get_design_variables()\n    \n    \n    def get_cur_searchspace(self) -> SearchSpace:\n        return self.tasks[self.__id].configuration_space\n    \n\n    def get_curtask(self):\n        return self.tasks[self.__id]\n    \n    \n    def get_cur_seed(self):\n        return self.tasks[self.__id].seed\n\n    def get_cur_task_id(self):\n        return self.tasks[self.__id].task_id\n\n    def get_cur_workload(self):\n        return self.tasks[self.__id].workload\n\n\n    def sync_query_num(self, query_num: int):\n        self.query_nums[self.__id] = query_num\n\n    def roll(self):\n        self.__id += 1\n\n    def lock(self):\n        self.tasks[self.__id].lock()\n\n    def unlock(self):\n        self.tasks[self.__id].unlock()\n\n    def get_lockstate(self):\n        return self.tasks[self.__id].get_lock_state()\n\n    def get_task_type(self):\n        if isinstance(self.tasks[self.__id], TabularProblem):\n            return \"tabular\"\n        elif isinstance(self.tasks[self.__id], NonTabularProblem):\n            return \"non-tabular\"\n        else:\n            logger.error(\"Unknown task type.\")\n            raise NameError\n\n\n    ###Methods only for tabular data###\n    def 
get_dataset_size(self):\n        assert isinstance(self.tasks[self.__id], TabularProblem)\n        return self.tasks[self.__id].get_dataset_size()\n\n    def get_var_by_idx(self, idx):\n        assert isinstance(self.tasks[self.__id], TabularProblem)\n        return self.tasks[self.__id].get_var_by_idx(idx)\n\n    def get_idx_by_var(self, vectors):\n        assert isinstance(self.tasks[self.__id], TabularProblem)\n        return self.tasks[self.__id].get_idx_by_var(vectors)\n\n    def get_unobserved_vars(self):\n        assert isinstance(self.tasks[self.__id], TabularProblem)\n        return self.tasks[self.__id].get_unobserved_vars()\n\n    def get_unobserved_idxs(self):\n        assert isinstance(self.tasks[self.__id], TabularProblem)\n        return self.tasks[self.__id].get_unobserved_idxs()\n\n    def add_query_num(self):\n        if self.get_lockstate() == False:\n            self.query_nums[self.__id] += 1\n\n    def f(\n        self,\n        configuration: Union[\n            Dict,\n            List[Dict],\n        ],\n        fidelity: Union[\n            Dict,\n            None,\n            List[Dict],\n        ] = None,\n        **kwargs,\n    ):\n        if isinstance(configuration, list):\n            try:\n                if (\n                    self.get_query_num() + len(configuration) > self.get_cur_budget()\n                    and self.get_lockstate() == False\n                ):\n                    logger.error(\n                        \" The current function evaluation has exceeded the user-set budget.\"\n                    )\n                    raise RuntimeError(\"The current function evaluation has exceeded the user-set budget.\")\n            except RuntimeError as e:\n                return None\n\n            if isinstance(fidelity, list):\n                assert len(fidelity) == len(configuration)\n            elif fidelity is None:\n                fidelity = [None] * len(configuration)\n            else:\n                pass\n\n            results = []\n            for c_id, config in enumerate(configuration):\n                result = self.tasks[self.__id].f(config, fidelity[c_id])\n                self.add_query_num()\n\n                results.append(result)\n            return results\n        else:\n            if (\n                self.get_query_num() >= self.get_cur_budget()\n                and self.get_lockstate() == False\n            ):\n                logger.error(\n                    \" The current function evaluation has exceeded the user-set budget.\"\n                )\n                raise EnvironmentError\n\n            result = self.tasks[self.__id].f(configuration, fidelity)\n            self.add_query_num()\n            return result\n\n            # raise TypeError(f\"Unrecognized task type.\")\n\n\nclass RemoteTransferOptBenchmark(TransferProblem):\n    def __init__(\n        self, server_url, seed: Union[int, np.random.RandomState, None] = None, **kwargs\n    ):\n        super().__init__(seed=seed, **kwargs)\n        self.client = ExperimentClient(server_url)\n        self.task_params_list = []\n\n    def add_task_to_id(\n        self,\n        insert_id: int,\n        task: NonTabularProblem | TabularProblem,\n        task_params,\n    ):\n        assert insert_id < len(self.tasks) + 1\n\n        self.task_params_list.insert(insert_id, task_params)\n        self.tasks.insert(insert_id, task)\n        self.query_nums.insert(insert_id, 0)\n\n    def f(\n        self,\n        configuration: Union[\n            Dict,\n         
class RemoteTransferOptBenchmark(TransferProblem):\n    def __init__(\n        self, server_url, seed: Union[int, np.random.RandomState, None] = None, **kwargs\n    ):\n        super().__init__(seed=seed, **kwargs)\n        self.client = ExperimentClient(server_url)\n        self.task_params_list = []\n\n    def add_task_to_id(\n        self,\n        insert_id: int,\n        task: NonTabularProblem | TabularProblem,\n        task_params,\n    ):\n        assert insert_id <= len(self.tasks)\n\n        self.task_params_list.insert(insert_id, task_params)\n        self.tasks.insert(insert_id, task)\n        self.query_nums.insert(insert_id, 0)\n\n    def f(\n        self,\n        configuration: Union[Dict, List[Dict]],\n        fidelity: Union[Dict, None, List[Dict]] = None,\n        idx: Union[int, None, List[int]] = None,\n        **kwargs,\n    ):\n        space = self.get_cur_searchspace()\n        bench_name = self.get_curname().split(\"_\")[0]\n        bench_params = self.task_params_list[self.get_curid()]\n\n        if not space or not bench_name or not bench_params:\n            raise ValueError(\"Missing or incorrect data for benchmark.\")\n\n        # Package data\n        data = self._package_data(\n            space, bench_name, bench_params, configuration, fidelity, idx, **kwargs\n        )\n\n        result = self._execute_experiment(data)\n\n        return result\n\n    def _package_data(\n        self, space, bench_name, bench_params, configuration, fidelity, idx, **kwargs\n    ):\n        return {\n            \"benchmark\": bench_name,\n            \"id\": space[\"task_id\"],\n            \"budget\": space[\"budget\"],\n            \"seed\": space[\"seed\"],\n            \"bench_params\": bench_params,\n            \"fitness_params\": {\n                \"configuration\": configuration,\n                \"fidelity\": fidelity,\n                \"idx\": idx,\n                **kwargs,\n            },\n        }\n\n    def _execute_experiment(self, data):\n        # Send the experiment to the server, then block until the result is ready.\n        task_id = self.client.start_experiment(data)\n        return self.client.wait_for_result(task_id)\n
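\n\n# --- Usage sketch (illustrative only, not part of the library) ---\n# Assumes a reachable experiment server; the URL and the configurations are\n# hypothetical placeholders, so the calls are left commented out.\nif __name__ == \"__main__\":\n    # remote = RemoteTransferOptBenchmark(\"http://localhost:5000\", seed=0)\n    # result = remote.f({\"x0\": 1.0, \"x1\": -0.5})  # single evaluation\n    # results = remote.f([{\"x0\": 0.0}, {\"x0\": 1.0}])  # batch evaluation\n    pass\n"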
  },
  {
    "path": "transopt/benchmark/synthetic/MovingPeakBenchmark.py",
    "content": "import logging\nimport numpy as np\nimport ConfigSpace as CS\nimport matplotlib.pyplot as plt\nfrom sklearn.preprocessing import normalize\nfrom typing import Union, Tuple, Dict, List\n\nfrom transopt.benchmark.problem_base import NonTabularProblem\nfrom agent.registry import benchmark_register\n\n\nlogger = logging.getLogger(\"MovingPeakBenchmark\")\n\n\nclass MovingPeakGenerator:\n    def __init__(\n        self,\n        n_var,\n        shift_length=3.0,\n        height_severity=7.0,\n        width_severity=1.0,\n        lam=0.5,\n        n_peak=4,\n        n_step=11,\n        seed=None,\n    ):\n        if seed is not None:\n            np.random.seed(seed)\n        self.n_var = n_var\n        self.shift_length = shift_length\n        self.height_severity = height_severity\n        self.width_severity = width_severity\n\n        # lambda determines whether there is a direction of the movement, or whether they are totally random.\n        # For lambda = 1.0 each move has the same direction, while for lambda = 0.0, each move has a random direction\n        self.lam = lam\n\n        # number of peaks in the landscape\n        self.n_peak = n_peak\n\n        self.var_bound = np.array([[0, 100]] * n_var)\n\n        self.height_bound = np.array([[30, 70]] * n_peak)\n\n        self.width_bound = np.array([[1.0, 12.0]] * n_peak)\n\n        self.n_step = n_step\n\n        self.t = 0\n\n        self.bounds = np.array(\n            [[-1.0] * self.n_var, [1.0] * self.n_var], dtype=np.float64\n        )\n\n        current_peak = np.random.random(size=(n_peak, n_var)) * np.tile(\n            self.var_bound[:, 1] - self.var_bound[:, 0], (n_peak, 1)\n        ) + np.tile(self.var_bound[:, 0], (n_peak, 1))\n\n        current_width = (\n            np.random.random(size=(n_peak,))\n            * (self.width_bound[:, 1] - self.width_bound[:, 0])\n            + self.width_bound[:, 0]\n        )\n\n        current_height = (\n            np.random.random(size=(n_peak,))\n            * (self.height_bound[:, 1] - self.height_bound[:, 0])\n            + self.height_bound[:, 0]\n        )\n\n        previous_shift = normalize(\n            np.random.random(size=(n_peak, n_var)), axis=1, norm=\"l2\"\n        )\n\n        self.peaks = []\n        self.widths = []\n        self.heights = []\n\n        self.peaks.append(current_peak)\n        self.widths.append(current_width)\n        self.heights.append(current_height)\n\n        for t in range(1, n_step):\n            peak_shift = self.cal_peak_shift(previous_shift)\n            width_shift = self.cal_width_shift()\n            height_shift = self.cal_height_shift()\n            current_peak = current_peak + peak_shift\n            current_height = current_height + height_shift.squeeze()\n            current_width = current_width + width_shift.squeeze()\n            for i in range(self.n_peak):\n                self._fix_bound(current_peak[i, :], self.var_bound)\n            self._fix_bound(current_width, self.width_bound)\n            self._fix_bound(current_height, self.height_bound)\n            previous_shift = peak_shift\n            self.peaks.append(current_peak)\n            self.widths.append(current_width)\n            self.heights.append(current_height)\n\n    def get_MPB(self):\n        return self.peaks, self.widths, self.heights\n\n    def cal_width_shift(self):\n        width_change = np.random.random(size=(self.n_peak, 1))\n        return self.width_severity * width_change\n\n    def cal_height_shift(self):\n        height_change = 
\n    def cal_height_shift(self):\n        height_change = np.random.random(size=(self.n_peak, 1))\n        return self.height_severity * height_change\n\n    def cal_peak_shift(self, previous_shift):\n        peak_change = np.random.random(size=(self.n_peak, self.n_var))\n        return (1 - self.lam) * self.shift_length * normalize(\n            peak_change - 0.5, axis=1, norm=\"l2\"\n        ) + self.lam * previous_shift\n\n    def change(self):\n        if self.t < self.n_step - 1:\n            self.t += 1\n\n    def current_optimal(self, peak_shape=None):\n        current_peak = self.peaks[self.t]\n        current_height = self.heights[self.t]\n        optimal_x = np.atleast_2d(current_peak[np.argmax(current_height)])\n        # The generator has no evaluation function of its own (the original called\n        # a nonexistent self.f); at the centre of the highest peak the landscape\n        # value equals that peak's height for the default cone shape.\n        optimal_y = np.max(current_height)\n        return optimal_x, optimal_y\n\n    def transfer(self, X):\n        return (X + 1) * (self.var_bound[:, 1] - self.var_bound[:, 0]) / 2 + (\n            self.var_bound[:, 0]\n        )\n\n    def normalize(self, X):\n        return (\n            2\n            * (X - (self.var_bound[:, 0]))\n            / (self.var_bound[:, 1] - self.var_bound[:, 0])\n            - 1\n        )\n\n    @property\n    def optimizers(self):\n        current_peak = self.peaks[self.t]\n        current_height = self.heights[self.t]\n        optimal_x = np.atleast_2d(current_peak[np.argmax(current_height)])\n        optimal_x = self.normalize(optimal_x)\n        return optimal_x\n\n    @staticmethod\n    def _fix_bound(data, bound):\n        # Reflect out-of-range entries back into the box; if a value is still\n        # outside after one reflection, pull it towards the interval's midpoint.\n        for i in range(data.shape[0]):\n            if data[i] < bound[i, 0]:\n                data[i] = 2 * bound[i, 0] - data[i]\n            elif data[i] > bound[i, 1]:\n                data[i] = 2 * bound[i, 1] - data[i]\n            while data[i] < bound[i, 0] or data[i] > bound[i, 1]:\n                data[i] = data[i] * 0.5 + bound[i, 0] * 0.25 + bound[i, 1] * 0.25\n\n\n@benchmark_register(\"MPB\")\nclass MovingPeakBenchmark(NonTabularProblem):\n    def __init__(\n        self,\n        task_name,\n        budget,\n        peak,\n        height,\n        width,\n        seed,\n        input_dim,\n        task_type=\"non-tabular\",\n    ):\n        self.dimension = input_dim\n        self.peak = peak\n        self.height = height\n        self.width = width\n        self.n_peak = len(peak)\n        super(MovingPeakBenchmark, self).__init__(\n            task_name=task_name, seed=seed, task_type=task_type, budget=budget\n        )\n\n    def peak_function_cone(self, x):\n        distance = np.linalg.norm(np.tile(x, (self.n_peak, 1)) - self.peak, axis=1)\n        return np.max(self.height - self.width * distance)\n\n    def peak_function_sharp(self, x):\n        distance = np.linalg.norm(np.tile(x, (self.n_peak, 1)) - self.peak, axis=1)\n        return np.max(self.height / (1 + self.width * distance * distance))\n\n    def peak_function_hilly(self, x):\n        distance = np.linalg.norm(np.tile(x, (self.n_peak, 1)) - self.peak, axis=1)\n        return np.max(\n            self.height\n            - self.width * distance * distance\n            - 0.01 * np.sin(20.0 * distance * distance)\n        )\n
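\n    # The three peak shapes share the distance computation and differ only in\n    # how height decays with distance from each peak centre: linearly (\"cone\"),\n    # as a rational function (\"sharp\"), or quadratically with a sine ripple (\"hilly\").\n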
\n    def objective_function(\n        self,\n        configuration: Union[CS.Configuration, Dict],\n        fidelity: Union[Dict, CS.Configuration, None] = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        peak_shape = kwargs.get(\"peak_shape\", \"cone\")\n\n        X = np.array([[configuration[k] for k in configuration.keys()]])\n        if peak_shape == \"cone\":\n            peak_function = self.peak_function_cone\n        elif peak_shape == \"sharp\":\n            peak_function = self.peak_function_sharp\n        elif peak_shape == \"hilly\":\n            peak_function = self.peak_function_hilly\n        else:\n            logger.warning(\"Unknown peak shape '%s'; falling back to 'cone'.\", peak_shape)\n            peak_function = self.peak_function_cone\n        y = peak_function(X)\n\n        return {\"function_value\": float(y), \"info\": {\"fidelity\": fidelity}}\n\n    def get_configuration_space(\n        self, seed: Union[int, None] = None\n    ) -> CS.ConfigurationSpace:\n        \"\"\"\n        Creates a ConfigSpace.ConfigurationSpace containing all parameters for\n        the moving-peak benchmark\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the ConfigSpace.ConfigurationSpace\n\n        Returns\n        -------\n        ConfigSpace.ConfigurationSpace\n        \"\"\"\n        seed = seed if seed is not None else np.random.randint(1, 100000)\n        cs = CS.ConfigurationSpace(seed=seed)\n        cs.add_hyperparameters(\n            [\n                CS.UniformFloatHyperparameter(f\"x{i}\", lower=0.0, upper=100.0)\n                for i in range(self.dimension)\n            ]\n        )\n\n        return cs\n\n    def get_fidelity_space(\n        self, seed: Union[int, None] = None\n    ) -> CS.ConfigurationSpace:\n        \"\"\"\n        Creates a ConfigSpace.ConfigurationSpace containing all fidelity parameters for\n        the moving-peak benchmark\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the ConfigSpace.ConfigurationSpace\n\n        Returns\n        -------\n        ConfigSpace.ConfigurationSpace\n        \"\"\"\n        seed = seed if seed is not None else np.random.randint(1, 100000)\n        fidel_space = CS.ConfigurationSpace(seed=seed)\n\n        return fidel_space\n\n    def get_meta_information(self) -> Dict:\n        return {}\n
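\n\n# --- Usage sketch (illustrative only) ---\n# Generates a short moving-peak schedule and inspects step 0. Constructing the\n# benchmark itself may require framework setup, so that part stays commented out.\nif __name__ == \"__main__\":\n    gen = MovingPeakGenerator(n_var=2, n_peak=4, n_step=5, seed=0)\n    peaks, widths, heights = gen.get_MPB()\n    print(peaks[0].shape, widths[0].shape, heights[0].shape)  # (4, 2) (4,) (4,)\n    # bench = MovingPeakBenchmark(task_name=\"MPB_demo\", budget=100, peak=peaks[0],\n    #                             height=heights[0], width=widths[0], seed=0, input_dim=2)\n    # print(bench.objective_function({\"x0\": 50.0, \"x1\": 50.0}))\n"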
  },
  {
    "path": "transopt/benchmark/synthetic/MultiObjBenchmark.py",
    "content": "import os\nimport math\nimport logging\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport ConfigSpace as CS\nfrom typing import Union, Dict\nimport random\nfrom agent.registry import benchmark_register\nfrom transopt.benchmark.problem_base import NonTabularProblem\n\nlogger = logging.getLogger(\"MultiObjBenchmark\")\n\n\n@benchmark_register(\"AckleySphere\")\nclass AckleySphereOptBenchmark(NonTabularProblem):\n    def __init__(\n        self, task_name, budget, seed, workload = None, task_type=\"non-tabular\", **kwargs\n    ):\n\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n        self.workload = workload\n        rnd_instance = random.Random()\n        rnd_instance.seed(self.workload)\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.array([rnd_instance.random() for _ in range(self.input_dim)])[:, np.newaxis].T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift)\n        self.dtype = np.float64\n\n\n        super(AckleySphereOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            task_type=task_type,\n            budget=budget,\n            workload=workload,\n        )\n\n    def objective_function(\n        self,\n        configuration: Union[CS.Configuration, Dict],\n        fidelity: Union[Dict, CS.Configuration, None] = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        a = 20\n        b = 0.2\n        c = 2 * np.pi\n        f_1 = (\n            -a * np.exp(-b * np.sqrt(np.sum(X**2) / 2))\n            - np.exp(np.sum(np.cos(c * X)) / 2)\n            + a\n            + np.e\n        )\n        f_2 = np.sum(X**2)\n        return {\n            \"function_value_1\": float(f_1),\n            \"function_value_2\": float(f_2),\n            \"info\": {\"fidelity\": fidelity},\n        }\n\n    def get_configuration_space(\n        self, seed: Union[int, None] = None\n    ) -> CS.ConfigurationSpace:\n        \"\"\"\n        Creates a ConfigSpace.ConfigurationSpace containing all parameters for\n        the XGBoost Model\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the ConfigSpace.ConfigurationSpace\n\n        Returns\n        -------\n        ConfigSpace.ConfigurationSpace\n        \"\"\"\n        seed = seed if seed is not None else np.random.randint(1, 100000)\n        cs = CS.ConfigurationSpace(seed=seed)\n        cs.add_hyperparameters(\n            [\n                CS.UniformFloatHyperparameter(f\"x{i}\", lower=-5.12, upper=5.12)\n                for i in range(self.input_dim)\n            ]\n        )\n\n        return cs\n\n    def get_fidelity_space(\n        self, seed: Union[int, None] = None\n    ) -> CS.ConfigurationSpace:\n        \"\"\"\n        Creates a ConfigSpace.ConfigurationSpace containing all fidelity parameters for\n        the XGBoost Benchmark\n\n        Parameters\n        ----------\n        seed : int, None\n            Fixing the seed for the 
ConfigSpace.ConfigurationSpace\n\n        Returns\n        -------\n        ConfigSpace.ConfigurationSpace\n        \"\"\"\n        seed = seed if seed is not None else np.random.randint(1, 100000)\n        fidel_space = CS.ConfigurationSpace(seed=seed)\n\n        return fidel_space\n\n    def get_meta_information(self) -> Dict:\n        return {\"number_objective\": 2}\n"
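\n\n# --- Usage sketch (illustrative only; commented out because NonTabularProblem\n# may require framework setup) ---\n# if __name__ == \"__main__\":\n#     bench = AckleySphereOptBenchmark(\n#         task_name=\"ackley_sphere_demo\", budget=100, seed=0, workload=0,\n#         params={\"input_dim\": 2},\n#     )\n#     cfg = bench.get_configuration_space().sample_configuration()\n#     print(bench.objective_function(dict(cfg)))\n"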
  },
  {
    "path": "transopt/benchmark/synthetic/__init__.py",
    "content": "from transopt.benchmark.synthetic.synthetic_problems import (\n    # SphereOptBenchmark,\n    # RastriginOptBenchmark,\n    # SchwefelOptBenchmark,\n    # LevyROptBenchmark,\n    # GriewankOptBenchmark,\n    # RosenbrockOptBenchmark,\n    # DropwaveROptBenchmark,\n    # LangermannOptBenchmark,\n    # RotatedHyperEllipsoidOptBenchmark,\n    # SumOfDifferentPowersOptBenchmark,\n    # StyblinskiTangOptBenchmark,\n    # PowellOptBenchmark,\n    # DixonPriceOptBenchmark,\n    # cpOptBenchmark,\n    # mpbOptBenchmark,\n    Ackley,\n    # EllipsoidOptBenchmark,\n    # DiscusOptBenchmark,\n    # BentCigarOptBenchmark,\n    # SharpRidgeOptBenchmark,\n    # GriewankRosenbrockOptBenchmark,\n    # KatsuuraOptBenchmark,\n)"
  },
  {
    "path": "transopt/benchmark/synthetic/synthetic_problems.py",
    "content": "# %matplotlib notebook\n\nimport os\nimport math\nimport logging\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom typing import Union, Dict\nfrom transopt.space.variable import *\nfrom transopt.agent.registry import problem_registry\nfrom transopt.benchmark.problem_base.non_tab_problem import NonTabularProblem\nfrom transopt.space.search_space import SearchSpace\nfrom transopt.space.fidelity_space import FidelitySpace\nfrom matplotlib import gridspec\n\n\nlogger = logging.getLogger(\"SyntheticBenchmark\")\n\nclass SyntheticProblemBase(NonTabularProblem):\n    problem_type = \"synthetic\"\n    num_variables = []\n    num_objectives = 1\n    workloads = []\n    fidelity = None\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        super(SyntheticProblemBase, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def get_fidelity_space(self) -> FidelitySpace:\n        fs = FidelitySpace([])\n        return fs\n\n    def get_objectives(self) -> Dict:\n        return {'f1':'minimize'}\n\n    def get_problem_type(self):\n        return \"synthetic\"\n    \n\n\n@problem_registry.register(\"Sphere\")\nclass SphereOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift)\n        self.dtype = np.float64\n\n        super(SphereOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        y = np.sum((X) ** 2, axis=1)\n        # y +=  self.noise(n)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n    \n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-5.12, 5.12)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"Rastrigin\")\nclass RastriginOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n        
    self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift + 2.0)\n        self.dtype = np.float64\n\n        super(RastriginOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift - 0.4)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        pi = np.array([math.pi], dtype=self.dtype)\n        y = 10.0 * self.input_dim + np.sum((X) ** 2 - 10.0 * np.cos(pi * (X)), axis=1)\n        # y +=  self.noise(n)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-5.12, 5.12)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"Schwefel\")\nclass SchwefelOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(420.9687 - self.shift)\n        self.dtype = np.float64\n\n        super(SchwefelOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        y = 420 - np.sum(\n            np.multiply(X, np.sin(np.sqrt(abs(self.stretch * X - self.shift)))), axis=1\n        )\n        # y +=  self.noise(n)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-500.0, 
500.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n\n@problem_registry.register(\"LevyR\")\nclass LevyROptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift - 1.0)\n        self.dtype = np.float64\n\n        super(LevyROptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift - 0.1)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        w = 1.0 + X / 4.0\n        pi = np.array([math.pi], dtype=self.dtype)\n        part1 = np.sin(pi * w[..., 0]) ** 2\n        part2 = np.sum(\n            (w[..., :-1] - 1.0) ** 2\n            * (1.0 + 5.0 * np.sin(math.pi * w[..., :-1] + 1.0) ** 2),\n            axis=1,\n        )\n        part3 = (w[..., -1] - 1.0) ** 2 * (1.0 + np.sin(2 * math.pi * w[..., -1]) ** 2)\n        y = part1 + part2 + part3\n        # y +=  self.noise(n)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-10.0, 10.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"Griewank\")\nclass GriewankOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift)\n        self.dtype = np.float64\n\n        super(GriewankOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: 
Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        div = np.arange(start=1, stop=d + 1, dtype=self.dtype)\n        part1 = np.sum(X**2 / 4000.0, axis=1)\n        part2 = -np.prod(np.cos(X / np.sqrt(div)), axis=1)\n        y = part1 + part2 + 1.0\n        # y +=  self.noise(n)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-100.0, 100.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"Rosenbrock\")\nclass RosenbrockOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift)\n        self.dtype = np.float64\n\n        super(RosenbrockOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        y = np.sum(\n            100.0 * (X[..., 1:] - X[..., :-1] ** 2) ** 2 + (X[..., :-1] - 1) ** 2,\n            axis=-1,\n        )\n        # y +=  self.noise(n)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-5.0, 10.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"DropwaveR\")\nclass DropwaveROptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = 
parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift + 3.3)\n        self.dtype = np.float64\n\n        self.a = np.array([20], dtype=self.dtype)\n        self.b = np.array([0.2], dtype=self.dtype)\n        self.c = np.array([2 * math.pi], dtype=self.dtype)\n\n        super(DropwaveROptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift - 0.33)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        part1 = np.linalg.norm(X, axis=1)\n        y = -(3 + np.cos(part1)) / (0.1 * np.power(part1, 1.5) + 1)\n        # y +=  self.noise(n)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-10.0, 10.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"Langermann\")\nclass LangermannOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift)\n        self.dtype = np.float64\n\n        self.c = np.array([1, 2, 5])\n        self.m = 3\n        self.A = np.random.randint(1, 10, (self.m, self.input_dim))\n\n        super(LangermannOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        y = 0\n        for i in range(self.m):\n            part1 = np.exp(-np.sum(np.power(X - self.A[i], 2), axis=1) / np.pi)\n            part2 = np.cos(np.sum(np.power(X - self.A[i], 2), axis=1) * np.pi)\n            y += part1 * part2 * self.c[i]\n        # y +=  self.noise(n)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = 
fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (0.0, 10.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"RotatedHyperEllipsoid\")\nclass RotatedHyperEllipsoidOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift - 32.75)\n        self.dtype = np.float64\n\n        super(RotatedHyperEllipsoidOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift + 0.5)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        div = np.arange(start=d, stop=0, step=-1, dtype=self.dtype)\n        y = np.sum(div * X**2, axis=1)\n        # y +=  self.noise(n)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-65.536, 65.536)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"SumOfDifferentPowers\")\nclass SumOfDifferentPowersOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift + 0.238)\n        self.dtype = np.float64\n\n        super(SumOfDifferentPowersOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n 
       X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift - 0.238)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        y = np.zeros(shape=(n,), dtype=self.dtype)\n        for i in range(d):\n            y += np.abs(X[:, i]) ** (i + 1)\n        # y +=  self.noise(n)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-1.0, 1.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"StyblinskiTang\")\nclass StyblinskiTangOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift - 2.903534)\n        self.dtype = np.float64\n\n        super(StyblinskiTangOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        y = 0.5 * (X**4 - 16 * X**2 + 5 * X).sum(axis=1)\n        # y +=  self.noise(n)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-5.0, 5.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"Powell\")\nclass PowellOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = [tuple(0.0 for _ in range(self.input_dim))]\n        self.dtype = np.float64\n\n        
super(PowellOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        y = np.zeros_like(X[..., 0])\n        for i in range(self.input_dim // 4):\n            i_ = i + 1\n            part1 = (X[..., 4 * i_ - 4] + 10.0 * X[..., 4 * i_ - 3]) ** 2\n            part2 = 5.0 * (X[..., 4 * i_ - 2] - X[..., 4 * i_ - 1]) ** 2\n            part3 = (X[..., 4 * i_ - 3] - 2.0 * X[..., 4 * i_ - 2]) ** 4\n            part4 = 10.0 * (X[..., 4 * i_ - 4] - X[..., 4 * i_ - 1]) ** 4\n            y += part1 + part2 + part3 + part4\n        # y +=  self.noise(n)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-4.0, 5.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"DixonPrice\")\nclass DixonPriceOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = [\n            tuple(\n                math.pow(2.0, -(1.0 - 2.0 ** (-(i - 1))))\n                for i in range(1, self.input_dim + 1)\n            )\n        ]\n        self.dtype = np.float64\n\n        super(DixonPriceOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        part1 = (X[..., 0] - 1) ** 2\n        i = np.arange(start=2, stop=d + 1, step=1)\n        i = np.tile(i, (n, 1))\n        part2 = np.sum(i * (2.0 * X[..., 1:] ** 2 - X[..., :-1]) ** 2, axis=1)\n        y = part1 + part2\n        # y +=  self.noise(n)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n    \n    def 
get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-10.0, 10.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"cp\")\nclass cpOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = [\n            tuple(\n                math.pow(2.0, -(1.0 - 2.0 ** (-(i - 1))))\n                for i in range(1, self.input_dim + 1)\n            )\n        ]\n        self.dtype = np.float64\n\n        super(cpOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array(\n            [[configuration[k] for idx, k in enumerate(configuration.keys())]]\n        )[0]\n\n        part1 = np.sin(6 * X[0]) + X[1] ** 2\n        part2 = 0.1 * X[0] ** 2 + 0.1 * X[1] ** 2\n\n        if self.task_id == 1:\n            part3 = 0.1 * ((3) * (X[0] + 0.3)) ** 2 + 0.1 * ((3) * (X[1] + 0.3)) ** 2\n        else:\n            part3 = 0.1 * ((3) * (X[0] - 0.3)) ** 2 + 0.1 * ((3) * (X[1] - 0.3)) ** 2\n\n        y = part1 + part3 + part2\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-1.0, 1.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"mpb\")\nclass mpbOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = [\n            tuple(\n                math.pow(2.0, -(1.0 - 2.0 ** (-(i - 1))))\n                for i in range(1, self.input_dim + 1)\n            )\n        ]\n        self.dtype = np.float64\n\n        super(mpbOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            
budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([configuration[k] for k in configuration.keys()])\n\n        # Two fixed peak centres. The original built an invalid np.ndarray(...)\n        # and referenced undefined height/width attributes; the defaults below\n        # are illustrative placeholders.\n        n_peak = 2\n        self.peak = np.array([[-0.5, -0.5], [0.2, 0.2]])\n        height = getattr(self, \"height\", np.array([50.0, 60.0]))\n        width = getattr(self, \"width\", np.array([5.0, 5.0]))\n\n        # Every task_id branch computed the same expression, so the branching is\n        # collapsed: distance from X to each peak centre.\n        distance = np.linalg.norm(np.tile(X, (n_peak, 1)) - self.peak, axis=1)\n\n        y = np.max(height - width * distance)\n\n        return {\"f1\": float(y), \"info\": {\"fidelity\": fidelity}}\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables = [Continuous(f'x{i}', (-32.768, 32.768)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"Ackley\")\nclass Ackley(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift - 12)\n        self.dtype = np.float64\n\n        self.a = np.array([20], dtype=self.dtype)\n        self.b = np.array([0.2], dtype=self.dtype)\n        self.c = np.array([0.3 * math.pi], dtype=self.dtype)\n\n        super(Ackley, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift - 0.73)\n\n        n = X.shape[0]\n        d = X.shape[1]\n        a, b, c = self.a, self.b, self.c\n\n        part1 = -a * np.exp(-b / math.sqrt(d) * np.linalg.norm(X, axis=-1))\n        part2 = -(np.exp(np.mean(np.cos(c * X), axis=-1)))\n        y = part1 + part2 + a + math.e\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name]\n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-32.768, 32.768)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n
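\n# The Ellipsoid, Discus, and BentCigar problems below are BBOB-style\n# ill-conditioned quadratics: each weights squared coordinates with a condition\n# number of 1e6, and they differ only in which coordinates carry the large weight.\n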
@problem_registry.register(\"Ellipsoid\")\nclass EllipsoidOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift)\n        self.dtype = np.float64\n\n        self.condition = 1e6\n\n        super(EllipsoidOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        y = np.array([])\n        for x in X:\n            temp = x[0] * x[0]\n            for i in range(1, d):\n                # The original referenced an undefined `exponent`; the BBOB\n                # ellipsoid convention condition**(i / (d - 1)) is assumed here.\n                exponent = i / (d - 1) if d > 1 else 1.0\n                temp += pow(self.condition, exponent) * x[i] * x[i]\n            y = np.append(y, temp)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name]\n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-5.0, 5.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"Discus\")\nclass DiscusOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift)\n        self.dtype = np.float64\n\n        self.condition = 1e6\n\n        super(DiscusOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        y = np.array([])\n        for x in X:\n            temp = self.condition * x[0] * x[0]\n            for i in range(1, d):\n                temp += x[i] * x[i]\n            y =
np.append(y, temp)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-5.0, 5.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"BentCigar\")\nclass BentCigarOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift)\n        self.dtype = np.float64\n\n        self.condition = 1e6\n\n        super(BentCigarOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        y = np.array([])\n        for x in X:\n            temp = x[0] * x[0]\n            for i in range(1, d):\n                temp += self.condition * x[i] * x[i]\n            y = np.append(y, temp)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-5.0, 5.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"SharpRidge\")\nclass SharpRidgeOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift)\n        self.dtype = np.float64\n\n        self.alpha = 100.0\n\n        super(SharpRidgeOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n      
  )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        d_vars_40 = d / 40.0\n        vars_40 = int(math.ceil(d_vars_40))\n        y = np.array([])\n        for x in X:\n            temp = 0\n            for i in range(vars_40, d):\n                temp += x[i] * x[i]\n            temp = self.alpha * math.sqrt(temp / d_vars_40)\n            for i in range(vars_40):\n                temp += x[i] * x[i] / d_vars_40\n            y = np.append(y, temp)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-5.0, 5.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"GriewankRosenbrock\")\nclass GriewankRosenbrockOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift)\n        self.dtype = np.float64\n\n        super(GriewankRosenbrockOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        y = np.array([])\n        for x in X:\n            temp = 0\n            for i in range(len(x) - 1):\n                temp1 = x[i] * x[i] - x[i + 1]\n                temp2 = 1.0 - x[i]\n                temp3 = 100.0 * temp1**2 + temp2**2\n                temp += temp3 / 4000.0 - math.cos(temp3)\n            y = np.append(y, temp)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-5.0, 5.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\n@problem_registry.register(\"Katsuura\")\nclass KatsuuraOptBenchmark(SyntheticProblemBase):\n    def __init__(\n        self, task_name, 
budget_type, budget, seed, workload, **kwargs\n    ):\n        assert \"params\" in kwargs\n        parameters = kwargs[\"params\"]\n        self.input_dim = parameters[\"input_dim\"]\n\n        if \"shift\" in parameters:\n            self.shift = parameters[\"shift\"]\n        else:\n            shift = np.random.random(size=(self.input_dim, 1)).T\n            self.shift = (shift * 2 - 1) * 0.02\n\n        if \"stretch\" in parameters:\n            self.stretch = parameters[\"stretch\"]\n        else:\n            self.stretch = np.array([1] * self.input_dim, dtype=np.float64)\n\n        self.optimizers = tuple(self.shift)\n        self.dtype = np.float64\n\n        super(KatsuuraOptBenchmark, self).__init__(\n            task_name=task_name,\n            seed=seed,\n            workload=workload,\n            budget_type=budget_type,\n            budget=budget,\n        )\n\n    def objective_function(\n        self,\n        configuration: Dict,\n        fidelity: Dict = None,\n        seed: Union[np.random.RandomState, int, None] = None,\n        **kwargs,\n    ) -> Dict:\n        X = np.array([[configuration[k] for idx, k in enumerate(configuration.keys())]])\n\n        X = self.stretch * (X - self.shift)\n\n        n = X.shape[0]\n        d = X.shape[1]\n\n        y = np.array([])\n        for x in X:\n            result = 1.0\n            for i in range(len(x)):\n                temp = 0.0\n                for j in range(1, 33):\n                    temp1 = 2.0**j\n                    temp += abs(temp1 * x[i] - round(temp1 * x[i])) / temp1\n                temp = 1.0 + (i + 1) * temp\n                result *= temp ** (10.0 / (len(x) ** 1.2))\n            y = np.append(y, result)\n\n        results = {list(self.objective_info.keys())[0]: float(y)}\n        for fd_name in self.fidelity_space.fidelity_names:\n            results[fd_name] = fidelity[fd_name] \n        return results\n\n\n    def get_configuration_space(self) -> SearchSpace:\n        variables =  [Continuous(f'x{i}', (-5.0, 5.0)) for i in range(self.input_dim)]\n        ss = SearchSpace(variables)\n        return ss\n\n\ndef visualize_function(func_name, n_points=100):\n    \"\"\"Visualize synthetic benchmark functions in 1D and 2D.\n    \n    Args:\n        func_name (str): Name of the benchmark function\n        n_points (int): Number of points for visualization\n    \"\"\"\n    import matplotlib.pyplot as plt\n    from mpl_toolkits.mplot3d import Axes3D\n\n    # Create benchmark instance\n    params = {\"input_dim\": 2}  # We'll use 2D for visualization\n    benchmark = problem_registry.get(func_name)(\n        task_name=\"visualization\",\n        budget_type=\"time\",\n        budget=100,\n        seed=42,\n        workload=None,\n        params=params\n    )\n\n    # Create figure\n    fig = plt.figure(figsize=(15, 5))\n    \n    # 1D Plot - use set_position to adjust the position and size\n    # [left, bottom, width, height]\n    ax1 = fig.add_subplot(121)\n    ax1.set_position([0.05, 0.15, 0.35, 0.7])  # Adjust the position and size of the left plot\n    \n    x = np.linspace(-5, 5, n_points)\n    y = []\n    for xi in x:\n        config = {\"x0\": xi, \"x1\": 0.0}\n        result = benchmark.objective_function(config)\n        y.append(result[list(result.keys())[0]])\n    \n    ax1.plot(x, y, 'b-', linewidth=2)\n    ax1.set_title(f'{func_name} Function (1D)')\n    ax1.set_xlabel('x')\n    ax1.set_ylabel('f(x)')\n    ax1.grid(True)\n\n    # 2D Plot - use set_position to adjust the position and size\n    ax2 = fig.add_subplot(122, projection='3d')\n    ax2.set_position([0.5, 0.1, 0.45, 0.8])  # 
Adjust the position and size of the right plot\n    \n    x = np.linspace(-5, 5, n_points)\n    y = np.linspace(-5, 5, n_points)\n    X, Y = np.meshgrid(x, y)\n    Z = np.zeros_like(X)\n    \n    for i in range(n_points):\n        for j in range(n_points):\n            config = {\"x0\": X[i,j], \"x1\": Y[i,j]}\n            result = benchmark.objective_function(config)\n            Z[i,j] = result[list(result.keys())[0]]\n    \n    surf = ax2.plot_surface(X, Y, Z, cmap='viridis', \n                          linewidth=0, antialiased=True)\n    fig.colorbar(surf, ax=ax2, shrink=0.5, aspect=5)\n    \n    ax2.set_title(f'{func_name} Function (2D)')\n    ax2.set_xlabel('x')\n    ax2.set_ylabel('y')\n    ax2.set_zlabel('f(x,y)')\n    \n    plt.savefig(f'{func_name}.png', bbox_inches='tight', dpi=300)\n    plt.close()\n\n# Example usage:\nif __name__ == \"__main__\":\n    # Test visualization with some benchmark functions\n    functions = [\"Sphere\", \"Rastrigin\", \"Ackley\"]\n    for func in functions:\n        visualize_function(func)
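\n\n    # A minimal single-evaluation sketch (assumes the same registry keys used\n    # above): instantiate a 2-D BentCigar problem and evaluate one configuration.\n    # objective_function returns a dict keyed by the objective name.\n    bench = problem_registry.get(\"BentCigar\")(\n        task_name=\"demo\",\n        budget_type=\"time\",\n        budget=10,\n        seed=0,\n        workload=None,\n        params={\"input_dim\": 2},\n    )\n    print(bench.objective_function({\"x0\": 0.5, \"x1\": -0.5}))\n"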
  },
  {
    "path": "transopt/datamanager/__init__.py",
    "content": ""
  },
  {
    "path": "transopt/datamanager/database.py",
    "content": "import atexit\nimport json\nimport queue\nimport sqlite3\nimport time\nfrom multiprocessing import Event, Manager, Process, Queue\nfrom typing import Union\n\nimport numpy as np\nimport pandas as pd\n\nfrom transopt.utils.log import logger\nfrom transopt.utils.path import get_library_path\n\n\"\"\"\nDescriptions of the reserved database tables.\n\"\"\"\ntable_descriptions = {\n    \"_config\": \"\"\"\n        name varchar(200) not null,\n        config text not null,\n        is_experiment boolean not null default TRUE\n        \"\"\",\n    \"_metadata\": \"\"\"\n        table_name varchar(255) not null,\n        problem_name varchar(255) not null,\n        dimensions int,\n        objectives int,\n        fidelities text,\n        workloads int,\n        budget_type varchar(50),\n        budget int,\n        seeds int,\n        space_refiner varchar(50),\n        sampler varchar(50),\n        pretrain varchar(50),\n        model varchar(50),\n        acf varchar(50),\n        normalizer varchar(50),\n        dataset_selectors json,\n        PRIMARY KEY (table_name)\n    \"\"\",\n}\n\n\nclass DatabaseDaemon:\n    def __init__(self, data_path, task_queue, result_queue, stop_event):\n        self.data_path = data_path\n        self.task_queue = task_queue\n        self.result_queue = result_queue\n        self.stop_event = stop_event\n\n    def run(self):\n        with sqlite3.connect(self.data_path) as conn:\n            cursor = conn.cursor()\n            while not self.stop_event.is_set():\n                task = self.task_queue.get()  # Check every second\n                if task is None:  # Sentinel for stopping\n                    break\n                func, args, commit = task\n                try:\n                    result = func(cursor, *args)\n                    if commit:\n                        conn.commit()\n                    self.result_queue.put((\"SUCCESS\", result))\n                except Exception as e:\n                    conn.rollback()\n                    logger.error(\n                        f\"Database operation failed: {e}\", exc_info=True\n                    )\n                    self.result_queue.put((\"FAILURE\", e))\n\n\nclass Database:\n    def __init__(self, db_file_name=\"database.db\"):\n        self.data_path = get_library_path() / db_file_name\n\n        manager = Manager()\n        self.task_queue = manager.Queue()\n        self.result_queue = manager.Queue()\n        self.lock = manager.Lock()\n        self.transaction_lock = manager.Lock()\n        self.stop_event = manager.Event()\n\n        self.process = Process(\n            target=DatabaseDaemon(\n                self.data_path, self.task_queue, self.result_queue, self.stop_event\n            ).run\n        )\n        self.process.start()\n        atexit.register(self.close)\n\n        # reserved tables\n        self.reserved_tables = list(table_descriptions.keys())\n        for name, desc in table_descriptions.items():\n            if not self.check_table_exist(name):\n                self.execute(f'CREATE TABLE \"{name}\" ({desc})')\n\n    def close(self):\n        self.stop_event.set()\n        self.task_queue.put(None)\n        self.process.join()\n\n    def _execute(self, task, args=(), timeout=None, commit=True):\n        self.task_queue.put((task, args, commit))\n        try:\n            status, result = self.result_queue.get(timeout=timeout)\n            if status == \"SUCCESS\":\n                return result\n            else:\n                raise result  # 
Re-raise the exception from the daemon\n        except queue.Empty:\n            raise Exception(\"Task execution timed out or failed\")\n\n    @staticmethod\n    def query_exec(cursor, query, params, fetchone, fetchall, many):\n        if many:\n            cursor.executemany(query, params or [])\n        else:\n            cursor.execute(query, params or ())\n        if fetchone:\n            return cursor.fetchone()\n        if fetchall:\n            return cursor.fetchall()\n        return None\n\n    def execute(\n        self,\n        query,\n        params=None,\n        fetchone=False,\n        fetchall=False,\n        timeout=None,\n        commit=True,\n    ):\n        with self.lock:\n            return self._execute(\n                Database.query_exec,\n                (query, params, fetchone, fetchall, False),\n                timeout,\n                commit\n            )\n\n    def executemany(\n        self,\n        query,\n        params=None,\n        fetchone=False,\n        fetchall=False,\n        timeout=None,\n        commit=True,\n    ):\n        with self.lock:\n            return self._execute(\n                Database.query_exec,\n                (query, params, fetchone, fetchall, True),\n                timeout,\n                commit\n            )\n\n    def start_transaction(self):\n        self.execute(\"BEGIN\", commit=False)\n\n    def commit_transaction(self):\n        self.execute(\"COMMIT\", commit=False)\n\n    def rollback_transaction(self):\n        self.execute(\"ROLLBACK\", commit=False)\n        \n    \"\"\" \n    table\n    \"\"\"\n\n    def get_experiment_datasets(self):\n        \"\"\"Get the list of all tables that are marked as experiment datasets.\"\"\"\n        experiment_datasets = self.execute(\n            \"SELECT name FROM _config WHERE is_experiment = TRUE\", fetchall=True\n        )\n        return [\n            table[0]\n            for table in experiment_datasets\n            if table[0] not in self.reserved_tables\n        ]\n\n    def get_all_datasets(self):\n        \"\"\"Get the list of all tables registered in _config.\"\"\"\n        all_datasets = self.execute(\n            \"\"\"SELECT name, is_experiment FROM _config\"\"\", fetchall=True\n        )\n        return [\n            table[0] for table in all_datasets if table[0] not in self.reserved_tables\n        ]\n\n    def get_table_list(self):\n        \"\"\"Get the list of all database tables.\"\"\"\n        table_list = self.execute(\n            \"SELECT name FROM sqlite_master WHERE type='table'\", fetchall=True\n        )\n        return [\n            table[0] for table in table_list if table[0] not in self.reserved_tables\n        ]\n\n    def check_table_exist(self, name):\n        \"\"\"Check if a certain database table exists.\"\"\"\n        table_exists = self.execute(\n            \"SELECT name FROM sqlite_master WHERE type='table' AND name=?\",\n            params=(name,),\n            fetchone=True,\n        )\n        return table_exists is not None\n\n    def create_table(self, name, dataset_cfg, overwrite=False, is_experiment=True):\n        \"\"\"\n        Create and initialize a database table based on problem configuration.\n\n        Parameters\n        ----------\n        name: str\n            Name of the table to create and initialize.\n        dataset_cfg: dict\n            Configuration for the table schema.\n        overwrite : bool, optional\n            Flag to determine whether to overwrite the existing 
table, default is False.\n        is_experiment : bool, optional\n            Flag to denote if the table is for experimental use, default is True.\n        \"\"\"\n        if self.check_table_exist(name):\n            if overwrite:\n                self.remove_table(name)\n            else:\n                raise Exception(f\"Table {name} already exists\")\n\n        variables = dataset_cfg.get(\"variables\", [])\n        objectives = dataset_cfg.get(\"objectives\", [])\n        fidelities = dataset_cfg.get(\"fidelities\", [])\n\n        var_type_map = {\n            \"continuous\": \"float\",\n            \"log_continuous\": \"float\",\n            \"integer\": \"int\",\n            \"large_integer\": \"text\",  # Store large integers as text to handle very large values\n            \"int_exponent\": \"int\",\n            \"exp2\": \"int\",\n            \"categorical\": \"varchar(50)\",\n            # 'binary': 'boolean',\n        }\n\n        # description = ['status varchar(20) not null default \"unevaluated\"']\n        description = []\n\n        for var_info in variables:\n            description.append(\n                f'\"{var_info[\"name\"]}\" {var_type_map[var_info[\"type\"]]} not null'\n            )\n\n        for obj_info in objectives:\n            description.append(f'\"{obj_info[\"name\"]}\" float')\n\n        for fid_info in fidelities:\n            description.append(\n                f'\"{fid_info[\"name\"]}\" {var_type_map[fid_info[\"type\"]]} not null'\n            )\n\n        description += [\n            \"batch int default -1\",\n            \"error boolean default 0\",\n            # \"pareto boolean\",\n            # \"batch int not null\",\n            # \"order int default -1\",\n            # \"hypervolume float\",\n        ]\n\n        with self.transaction_lock:\n            try:\n                self.start_transaction()\n            \n                # Create the table\n                self.execute(f'CREATE TABLE \"{name}\" ({\",\".join(description)})', commit=False)\n\n                # Optionally, create indexes on certain columns\n                index_columns = [var[\"name\"] for var in variables] + [\n                    fid[\"name\"] for fid in fidelities if fid.get(\"index\", False)\n                ]\n                if index_columns:\n                    index_statement = \", \".join([f'\"{col}\"' for col in index_columns])\n                    self.execute(f'CREATE INDEX \"idx_{name}\" ON \"{name}\" ({index_statement})', commit=False)\n\n                self.create_or_update_config(name, dataset_cfg, is_experiment, commit=False)\n                if \"additional_config\" in dataset_cfg:\n                    self.create_or_update_metadata(name, dataset_cfg[\"additional_config\"], commit=False)\n            \n                self.commit_transaction()\n            except Exception as e:\n                self.rollback_transaction()  # Rollback if an error occurred\n                raise e\n\n    def remove_table(self, name):\n        if not self.check_table_exist(name):\n            raise Exception(f\"Table {name} does not exist\")\n        with self.transaction_lock:\n            try:\n                self.start_transaction()\n                self.execute(f\"DELETE FROM _config WHERE name = '{name}'\", commit=False)\n                self.execute(f\"DELETE FROM _metadata WHERE table_name = '{name}'\", commit=False)\n                self.execute(f'DROP TABLE IF EXISTS \"{name}\"', commit=False)\n                self.commit_transaction()\n            
except Exception as e:\n                self.rollback_transaction()\n                raise e\n\n    \"\"\"\n    config\n    \"\"\"\n\n    def create_or_update_config(self, name, dataset_cfg, is_experiment=True, commit=True):\n        \"\"\"\n        Create or update a configuration entry in the _config table for a given table.\n        \"\"\"\n        # Serialize dataset_cfg into JSON format\n        config_json = json.dumps(dataset_cfg)\n\n        # Check if the configuration already exists\n        if self.query_config(name) is not None:\n            # Update the existing configuration\n            self.execute(\n                \"UPDATE _config SET config = ?, is_experiment = ? WHERE name = ?\",\n                (config_json, is_experiment, name),\n                commit=commit\n            )\n        else:\n            # Insert a new configuration\n            self.execute(\n                \"INSERT INTO _config (name, config, is_experiment) VALUES (?, ?, ?)\",\n                (name, config_json, is_experiment),\n                commit=commit\n            )\n\n    def query_config(self, name):\n        config_json = self.execute(\n            \"SELECT config FROM _config WHERE name=?\", params=(name,), fetchone=True\n        )\n\n        if config_json is None:\n            return None\n        else:\n            return json.loads(config_json[0])\n\n    def query_dataset_info(self, name):\n        \"\"\"\n        Query the dataset information of a given table.\n        \"\"\"\n        config = self.query_config(name)\n\n        if config is None:\n            return None\n\n        variables = config[\"variables\"]\n        objectives = config[\"objectives\"]\n        fidelities = config[\"fidelities\"]\n\n        num_rows = self.get_num_row(name)\n\n        dataset_info = {\n            \"num_variables\": len(variables),\n            \"num_objectives\": len(objectives),\n            \"num_fidelities\": len(fidelities),\n            \"data_number\": num_rows,\n            **config,\n        }\n        return dataset_info\n\n    def create_or_update_metadata(self, table_name, metadata, commit=True):\n        \"\"\"\n        Create or update a metadata entry in the _metadata table for a given table.\n        \"\"\"\n        dataset_selectors_json = json.dumps(metadata.get(\"DatasetSelectors\", {}))\n        problem_name = metadata.get(\"problem_name\", \"\")\n\n        self.execute(\n            f\"\"\"\n            INSERT INTO _metadata (\n                table_name, problem_name, dimensions, objectives, fidelities, workloads, budget_type, budget, seeds,\n                space_refiner, sampler, pretrain, model, acf, normalizer, dataset_selectors\n            ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)\n            ON CONFLICT (table_name) DO UPDATE SET\n                problem_name = EXCLUDED.problem_name, dimensions = EXCLUDED.dimensions, objectives = EXCLUDED.objectives, \n                fidelities = EXCLUDED.fidelities, workloads = EXCLUDED.workloads, budget_type = EXCLUDED.budget_type,\n                budget = EXCLUDED.budget, seeds = EXCLUDED.seeds, space_refiner = EXCLUDED.space_refiner,\n                sampler = EXCLUDED.sampler, pretrain = EXCLUDED.pretrain, model = EXCLUDED.model, \n                acf = EXCLUDED.acf, normalizer = EXCLUDED.normalizer, dataset_selectors = EXCLUDED.dataset_selectors\n            \"\"\",\n            (\n                table_name,\n                problem_name,\n                metadata.get(\"dim\", 0),\n                
metadata.get(\"obj\", 0),\n                metadata.get(\"fidelity\", \"\"),\n                metadata.get(\"workloads\", 0),\n                metadata.get(\"budget_type\", \"\"),\n                metadata.get(\"budget\", 0),\n                metadata.get(\"seeds\", 0),\n                metadata.get(\"SpaceRefiner\", \"\"),\n                metadata.get(\"Sampler\", \"\"),\n                metadata.get(\"Pretrain\", \"\"),\n                metadata.get(\"Model\", \"\"),\n                metadata.get(\"ACF\", \"\"),\n                metadata.get(\"Normalizer\", \"\"),\n                dataset_selectors_json,\n            ),\n            commit=commit\n        )\n\n    def get_all_metadata(self):\n        \"\"\"\n        Get the metadata for all tables in the database.\n        \"\"\"\n        metadata = self.execute(\"SELECT * FROM _metadata\", fetchall=True)\n        return metadata\n\n    def search_tables_by_metadata(self, search_params):\n        \"\"\"\n        Search for tables based on metadata criteria.\n\n        Parameters:\n        ----------\n        search_params : dict\n            A dictionary where keys are metadata column names and values are the criteria values.\n\n        Returns:\n        -------\n        list of str\n            A list of table names that match the search criteria.\n        \"\"\"\n        if not search_params:\n            raise ValueError(\"Search parameters are required\")\n\n        # Constructing the WHERE clause dynamically based on the provided search parameters\n        where_clause = self._get_conditions(conditions=search_params)\n\n        query = f\"SELECT table_name FROM _metadata{where_clause}\"\n        result = self.execute(query, fetchall=True)\n\n        return [row[0] for row in result]\n\n    \"\"\"\n    basic operations\n    \"\"\"\n\n    def insert_data(\n        self, table, data: Union[dict, list, pd.DataFrame, np.ndarray]\n    ) -> list:\n        \"\"\"\n        Insert single-row or multiple-row data into the database.\n\n        Parameters\n        ----------\n        table: str\n            Name of the database table to insert into.\n        data: dict, list, pd.DataFrame, or np.ndarray\n            Data to insert. If a dictionary, it represents a single row of data\n            where keys are column names and values are data values. If a list,\n            each element represents a row (as a list or dict). 
If a DataFrame or\n            np.ndarray, each row represents a row to be inserted.\n\n        Returns\n        -------\n        list\n            List of row numbers of the inserted rows.\n        \"\"\"\n        if isinstance(data, dict):\n            # Single row insertion from dict\n            columns = list(data.keys())\n            values = [list(data.values())]\n        elif isinstance(data, list):\n            # Multiple row insertion from list of dicts or lists\n            if all(isinstance(row, dict) for row in data):\n                columns = list(data[0].keys())\n                values = [list(row.values()) for row in data]\n            elif all(isinstance(row, list) for row in data):\n                columns = None\n                values = data\n            else:\n                raise ValueError(\n                    \"All rows in data_list must be of the same type (all dicts or all lists)\"\n                )\n        elif isinstance(data, (pd.DataFrame, np.ndarray)):\n            # Convert DataFrame or ndarray to list of lists for insertion\n            values = (\n                data.tolist() if isinstance(data, np.ndarray) else data.values.tolist()\n            )\n            columns = data.columns.tolist() if isinstance(data, pd.DataFrame) else None\n        else:\n            raise ValueError(\n                \"Data parameter must be a dictionary, list, pandas DataFrame, or numpy ndarray\"\n            )\n\n        if columns:\n            column_str = \",\".join([f'\"{col}\"' for col in columns])\n            value_placeholders = \",\".join([\"?\"] * len(columns))\n            query = f'INSERT INTO \"{table}\" ({column_str}) VALUES ({value_placeholders})'\n        else:\n            # Without explicit columns, the values must cover every table column;\n            # an empty \"()\" column list would be a SQL syntax error.\n            value_placeholders = \",\".join([\"?\"] * len(values[0]))\n            query = f'INSERT INTO \"{table}\" VALUES ({value_placeholders})'\n        self.executemany(query, values)\n\n        # Get the rowids of the inserted rows\n        n_row = self.get_num_row(table)\n        len_data = len(data) if isinstance(data, list) else len(values)\n        return list(range(n_row - len_data + 1, n_row + 1))\n\n    def _get_conditions(self, rowid=None, conditions=None):\n        \"\"\"\n        Construct SQL conditions for a query based on rowid and additional conditions.\n\n        Parameters\n        ----------\n        rowid: int/list\n            Row number(s) of the table to query (if None then no rowid condition is added).\n        conditions: dict\n            Additional conditions for querying (key: column name, value: column value).\n\n        Returns\n        -------\n        str\n            SQL condition string.\n        \"\"\"\n        from collections.abc import Iterable\n\n        conditions_list = []\n\n        # Handling rowid conditions\n        if rowid is not None:\n            if isinstance(rowid, Iterable) and not isinstance(rowid, str):\n                rowid_condition = f'rowid IN ({\",\".join([str(r) for r in rowid])})'\n                conditions_list.append(rowid_condition)\n            else:\n                conditions_list.append(f\"rowid = {rowid}\")\n\n        # Handling additional conditions\n        if conditions:\n            for column, value in conditions.items():\n                if isinstance(value, str):\n                    value_str = f\"'{value}'\"  # Strings need to be quoted\n                else:\n                    value_str = str(value)\n                condition_str = f'\"{column}\" = {value_str}'\n                conditions_list.append(condition_str)\n\n        # Combine all conditions with 'AND'\n    
    if conditions_list:\n            return \" WHERE \" + \" AND \".join(conditions_list)\n        else:\n            return \"\"\n\n    def update_data(self, table, data, rowid=None, conditions=None):\n        \"\"\"\n        Update single-row or multiple-row data in the database.\n\n        Parameters\n        ----------\n        table: str\n            Name of the database table to update.\n        data: dict or list of dicts\n            Data to update. If a dictionary, it represents a single row of data\n            where keys are column names and values are data values.\n            If a list, each dictionary in the list represents a row to be updated.\n        rowid: int/list\n            Row number(s) of the table to update. If None, conditions are used.\n        conditions: dict\n            Additional conditions for updating (key: column name, value: column value).\n        \"\"\"\n        if isinstance(data, dict):\n            data = [data]\n\n        update_values = []\n        for row in data:\n            columns = list(row.keys())\n            values = list(row.values())\n            set_clause = \", \".join([f'\"{col}\" = ?' for col in columns])\n            query = f'UPDATE \"{table}\" SET {set_clause}'\n\n            if rowid:\n                query += f\" WHERE rowid = ?\"\n                values.append(rowid)\n            elif conditions:\n                condition_str = \" AND \".join([f'\"{k}\" = ?' for k in conditions.keys()])\n                query += f\" WHERE {condition_str}\"\n                values.extend(conditions.values())\n            else:\n                raise ValueError(\"Either rowid or conditions must be provided\")\n\n            update_values.append(values)\n\n        self.executemany(query, update_values)\n\n    def delete_data(self, table, rowid=None, conditions=None):\n        \"\"\"\n        Delete single-row or multiple-row data in the database.\n\n        Parameters\n        ----------\n        table: str\n            Name of the database table to delete from.\n        rowid: int/list\n            Row number(s) of the table to delete. 
If None, conditions are used.\n        conditions: dict\n            Additional conditions for deleting (key: column name, value: column value).\n        \"\"\"\n        query = f'DELETE FROM \"{table}\"'\n        condition = self._get_conditions(rowid=rowid, conditions=conditions)\n        query += condition\n\n        self.execute(query)\n\n    def select_data(\n        self, table, columns=None, rowid=None, conditions=None, as_dataframe=False\n    ) -> Union[list, pd.DataFrame]:\n        \"\"\"\n        Select data in the database.\n\n        Parameters\n        ----------\n        table: str\n            Name of the database table to query.\n        columns: str/list\n            Column name(s) of the table to query (if None then select all columns).\n        rowid: int/list\n            Row number(s) of the table to query (if None then select all rows).\n        conditions: dict\n            Additional conditions for querying (key: column name, value: column value).\n        as_dataframe: bool\n            If True, return the result as a pandas DataFrame.\n\n        Returns\n        -------\n        list of dicts or pd.DataFrame\n            Selected data, each row as a dictionary with column names as keys.\n        \"\"\"\n        if columns is None:\n            query = f'SELECT * FROM \"{table}\"'\n            columns = self.get_column_names(table)\n        elif isinstance(columns, str):\n            query = f'SELECT \"{columns}\" FROM \"{table}\"'\n            columns = [columns]\n        else:\n            column_str = \",\".join([f'\"{col}\"' for col in columns])\n            query = f'SELECT {column_str} FROM \"{table}\"'\n\n        condition = self._get_conditions(rowid=rowid, conditions=conditions)\n        query += condition\n\n        results = self.execute(query, fetchall=True)\n        if as_dataframe:\n            return pd.DataFrame(results, columns=columns)\n        else:\n            # Convert each result tuple to a dict keyed by column name. Large\n            # integers stored as text are converted back to int; the optional\n            # \"large_integer_columns\" attribute defaults to an empty set.\n            large_int_cols = getattr(self, \"large_integer_columns\", set())\n            converted_results = []\n            for row in results:\n                row_dict = dict(zip(columns, row))\n                for key, value in row_dict.items():\n                    if isinstance(value, str) and value.isdigit() and key in large_int_cols:\n                        row_dict[key] = int(value)\n                converted_results.append(row_dict)\n            return converted_results\n\n    def get_num_row(self, table):\n        query = f'SELECT COUNT(*) FROM \"{table}\"'\n        return self.execute(query, fetchone=True)[0]\n\n    def get_column_names(self, table):\n        \"\"\"Get the column names of a database table.\"\"\"\n        query = f'PRAGMA table_info(\"{table}\")'\n        return [col[1] for col in self.execute(query, fetchall=True)]\n
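\n\nif __name__ == \"__main__\":\n    # Minimal usage sketch, not part of the library API: the table name,\n    # column names, and values below are hypothetical.\n    db = Database(\"example.db\")\n    cfg = {\n        \"variables\": [{\"name\": \"x0\", \"type\": \"continuous\"}],\n        \"objectives\": [{\"name\": \"f1\"}],\n        \"fidelities\": [],\n    }\n    db.create_table(\"demo_table\", cfg, overwrite=True)\n    db.insert_data(\"demo_table\", {\"x0\": 0.5, \"f1\": 1.0})\n    print(db.select_data(\"demo_table\"))\n    db.close()\n"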
  },
  {
    "path": "transopt/datamanager/lsh.py",
    "content": "import numpy as np\nfrom collections import defaultdict\n\nfrom transopt.datamanager.minhash import MinHasher\n\nclass LSHCache:\n    def __init__(self, hasher, num_bands=10):\n        \"\"\"\n        Initialize the LSH object with the specified number of bands and rows per band.\n\n        Parameters:\n        -----------\n        hasher: MinHasher\n            An object that computes minhashes for a given input text.\n\n        num_bands: int\n            The number of bands to divide the minhash signature matrix into.\n        \"\"\"\n        assert (\n            hasher.num_hashes % num_bands == 0\n        ), \"num_hashes must be divisible by num_bands\"\n        \n        self.buckets = [defaultdict(set) for _ in range(num_bands)]\n        self.hasher = hasher\n        \n        self.band_width = hasher.num_hashes // num_bands\n        self.num_bands = num_bands\n        \n        self.fingerprints = {}\n\n    def add(self, key, vector):\n        \"\"\"\n        Add a multidimensional vector to the cache.\n\n        Parameters:\n        -----------\n        key: any hashable\n            A unique identifier for the vector.\n\n        vector: tuple (str, str, int, int)\n            A tuple representing the multidimensional vector. The tuple format is:\n            (task_name, variable_names, num_variables, num_objectives)\n\n        \"\"\"\n        if vector is None:\n            return\n        # Compute a combined fingerprint for the string dimensions\n        combined_fp = []\n        for dimension in vector[:2]:  # Only take the first two string dimensions\n            combined_fp.extend(self.hasher.fingerprint(dimension))\n\n        # Incorporate the integer dimensions by modifying the bucket key\n        num_variables = vector[2]\n        num_objectives = vector[3]\n        \n        # Store the combined fingerprint\n        self.fingerprints[key] = (combined_fp, num_variables, num_objectives)\n        \n        # Divide the fingerprint into bands and store in buckets with the integers as part of the key\n        for band_idx in range(self.num_bands):\n            start = band_idx * self.band_width\n            end = start + self.band_width\n            band_fp = (tuple(combined_fp[start:end]), num_variables, num_objectives)\n            self.buckets[band_idx][band_fp].add(key)\n\n            \n    def query(self, vector):\n        \"\"\"\n        Query similar vectors in the cache.\n\n        Parameters:\n        -----------\n        vector: tuple of (str, str, int, int)\n            The multidimensional vector to find similar items to. 
The format is:\n            (task_name, variable_names, num_variables, num_objectives)\n\n        Returns:\n        --------\n        set\n            A set of keys of similar vectors.\n        \"\"\"\n        if vector is None:\n            return set()\n        similar_items = set()\n        combined_fp = []\n        for dimension in vector[:2]:  # Only take the first two string dimensions\n            combined_fp.extend(self.hasher.fingerprint(dimension))\n\n        num_variables = vector[2]\n        num_objectives = vector[3]\n\n        # Check for similarity across all bands\n        for band_idx in range(self.num_bands):\n            start = band_idx * self.band_width\n            end = start + self.band_width\n            band_fp = (tuple(combined_fp[start:end]), num_variables, num_objectives)\n            if band_fp in self.buckets[band_idx]:\n                similar_items.update(self.buckets[band_idx][band_fp])\n\n        return similar_items\n\n\nif __name__ == \"__main__\":\n    # Example usage assuming MinHasher class is defined and imported correctly.\n    hasher = MinHasher(num_hashes=200, char_ngram=2, random_state=42)\n    lsh_cache = LSHCache(hasher, num_bands=10)\n    lsh_cache.add(\"doc1\", (\"parameters1\", \"objectives1\", 10, 5))\n    lsh_cache.add(\"doc2\", (\"parameters2\", \"objectives2\", 10, 5))\n    print(lsh_cache.query((\"parameters2\", \"objectives2\", 10, 5)))
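\n\n    # Since \"doc2\" was added with an identical vector, every band fingerprint\n    # matches and the query returns at least {'doc2'}; \"doc1\" may also appear\n    # if its near-identical strings collide in some band.\n"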
  },
  {
    "path": "transopt/datamanager/manager.py",
    "content": "# import cProfile\n# import pstats\n\nfrom transopt.datamanager.database import Database\nfrom transopt.datamanager.lsh import LSHCache\nfrom transopt.datamanager.minhash import MinHasher\n\nfrom transopt.utils.log import logger\n\n\nclass DataManager:\n    _instance = None\n    _initialized = False  # 用于保证初始化代码只运行一次\n\n    def __new__(cls, *args, **kwargs):\n        if cls._instance is None:\n            cls._instance = super(DataManager, cls).__new__(cls)\n            cls._instance._initialized = False\n        return cls._instance\n    \n    def __init__(\n        self, db=None, num_hashes=100, char_ngram=5, num_bands=25, random_state=12345\n    ):\n        if not self._initialized:\n            if db is None:\n                self.db = Database()\n            else:\n                self.db = db\n\n            self._initialize_lsh_cache(num_hashes, char_ngram, num_bands, random_state)\n            self._initialized = True\n\n    def _initialize_lsh_cache(self, num_hashes, char_ngram, num_bands, random_state):\n        hasher = MinHasher(\n            num_hashes=num_hashes, char_ngram=char_ngram, random_state=random_state\n        )\n        self.lsh_cache = LSHCache(hasher, num_bands=num_bands)\n\n        datasets = self.db.get_experiment_datasets()\n\n        for dataset in datasets:\n            dataset_info = self.db.query_dataset_info(dataset)\n            self._add_lsh_vector(dataset, dataset_info)\n\n    def _add_lsh_vector(self, dataset_name, dataset_info):\n        vector = self._construct_vector(dataset_info)\n        self.lsh_cache.add(dataset_name, vector)\n\n    def _construct_vector(self, dataset_info):\n        try:\n            num_variables = dataset_info.get(\"num_variables\", len(dataset_info[\"variables\"]))\n            num_objectives = dataset_info.get(\"num_objectives\", len(dataset_info[\"objectives\"]))\n\n            variables = dataset_info[\"variables\"]\n            variable_names = \" \".join([var[\"name\"] for var in variables])\n            \n            task_name = dataset_info[\"additional_config\"]['problem_name']\n            return (task_name, variable_names, num_variables, num_objectives)\n        except KeyError:\n            logger.error(\n                f\"\"\"\n                Dataset does not have the required information. 
\n                (num_variables, num_objectives, variables)\n                \"\"\"\n            )\n            return None\n\n    def search_similar_datasets(self, problem_config):\n        vector = self._construct_vector(problem_config)\n        similar_datasets = self.lsh_cache.query(vector)\n        return similar_datasets\n\n    def search_datasets_by_name(self, dataset_name):\n        all_tables = self.db.get_all_datasets()\n        matching_tables = [\n            table for table in all_tables if dataset_name.lower() in table.lower()\n        ]\n        return matching_tables\n\n    def get_dataset_info(self, dataset_name):\n        return self.db.query_dataset_info(dataset_name)\n\n    def get_experiment_datasets(self):\n        return self.db.get_experiment_datasets()\n    \n    def get_all_datasets(self):\n        return self.db.get_all_datasets()\n\n    def create_dataset(self, dataset_name, dataset_info, overwrite=True):\n        self.db.create_table(dataset_name, dataset_info, overwrite)\n        \n        dataset_info_extended = self.db.query_dataset_info(dataset_name)\n        self._add_lsh_vector(dataset_name, dataset_info_extended)\n\n    def insert_data(self, dataset_name, data):\n        return self.db.insert_data(dataset_name, data)\n\n    def remove_dataset(self, dataset_name):\n        return self.db.remove_table(dataset_name)\n\n    def teardown(self):\n        # Reset the class-level singleton state so a fresh instance can be created\n        DataManager._instance = None\n        DataManager._initialized = False\n        self.db.close()\n\n\ndef main():\n    dm = DataManager(num_hashes=200, char_ngram=5, num_bands=100)\n\n    dataset = dm.db.get_table_list()[0]\n    test_query = dm.db.query_dataset_info(dataset)\n\n    # search_similar_datasets expects a dataset-info dict\n    sd = dm.search_similar_datasets(test_query)\n    print(sd)\n\n    print(dm.db.get_table_list()[:2])\n\n    dm.teardown()\n\n\nif __name__ == \"__main__\":\n    pass\n    # profiler = cProfile.Profile()\n    # profiler.run(\"main()\")\n    # stats = pstats.Stats(profiler)\n    # stats.strip_dirs().sort_stats(\"time\").print_stats(10)\n"
  },
  {
    "path": "transopt/datamanager/minhash.py",
    "content": "from concurrent.futures import ThreadPoolExecutor\n\nimport mmh3\nimport numpy as np\n\n\nclass MinHasher:\n    def __init__(self, num_hashes, char_ngram, random_state=None):\n        \"\"\"\n        Parameters:\n        -----------\n        num_hashes: int\n            The number of hash functions to use. A minhash is computed for each\n            hash function derived from different random seeds.\n\n        char_ngram: int\n            The number of consecutive characters to include in a sliding window\n            when creating the document shingles.\n\n        random_state: None, int, np.random.RandomState\n            A random state to initialise the random number generator with.\n        \"\"\"\n        self.num_hashes = num_hashes\n        self.char_ngram = char_ngram\n\n        random_state = np.random.RandomState(random_state)\n        self._seeds = random_state.randint(0, 1e6, size=num_hashes)\n\n    @property\n    def num_seeds(self):\n        return len(self._seeds)\n\n    def get_shingles(self, text):\n        \"\"\"Extract character-based shingles from text.\"\"\"\n        return set(\n            text[i : i + self.char_ngram]\n            for i in range(len(text) - self.char_ngram + 1)\n        )\n\n    def fingerprint(self, text):\n        shingles = self.get_shingles(text)\n        minhashes = [float(\"inf\")] * self.num_hashes\n        for shingle in shingles:\n            # Ensure the input is in bytes for mmh3\n            encoded_shingle = shingle.encode(\"utf-8\")\n            for i, seed in enumerate(self._seeds):\n                hash_val = mmh3.hash(encoded_shingle, int(seed)) % (2**32)\n                if hash_val < minhashes[i]:\n                    minhashes[i] = hash_val\n        \n        return minhashes\n    \n    def estimate_similarity(self, fp1, fp2):\n        return sum(1 for x, y in zip(fp1, fp2) if x == y) / self.num_hashes\n\n\ndef jaccard_similarity(set1, set2):\n    if not isinstance(set1, set):\n        set1 = set(set1)\n    if not isinstance(set2, set):\n        set2 = set(set2)\n    return len(set1.intersection(set2)) / len(set1.union(set2))\n\n\nif __name__ == \"__main__\":\n    text1 = \"Lorem Ipsum dolor sit ametsdaasdsad\"\n    text2 = \"Lorem Ipsum dolor sit amet is how dummy text starts\"\n\n    # Create a MinHasher instance\n    hasher = MinHasher(num_hashes=100, char_ngram=2, random_state=12345)\n\n    # Compute shingles for both texts\n    shingles1 = hasher.get_shingles(text1)\n    shingles2 = hasher.get_shingles(text2)\n\n    # Compute MinHashes for both texts\n    fp1 = hasher.fingerprint(text1)\n    fp2 = hasher.fingerprint(text2)\n\n    # Comparing MinHash signatures to estimate similarity\n    estimated_similarity = hasher.estimate_similarity(fp1, fp2)\n    print(f\"Estimated similarity: {estimated_similarity:.4f}\")\n    print(\n        f\"Jaccard similarity: {jaccard_similarity(hasher.get_shingles(text1), hasher.get_shingles(text2)):.4f}\"\n    )\n"
  },
  {
    "path": "transopt/optimizer/MultiObjOptimizer/CauMOpt.py",
    "content": "import numpy as np\nimport GPy\nfrom typing import Dict, Union, List\n\nfrom transopt.optimizer.optimizer_base import BOBase\nfrom agent.registry import optimizer_register\nfrom transopt.utils.Normalization import get_normalizer\nfrom transopt.utils.serialization import ndarray_to_vectors,vectors_to_ndarray\n\nfrom sklearn.ensemble import ExtraTreesRegressor\n\n\n\ndef calculate_gini_index(labels):\n    _, counts = np.unique(labels, return_counts=True)\n    probabilities = counts / counts.sum()\n    gini = 1 - sum(probabilities ** 2)\n    return gini\n\n\ndef features_by_gini(data, labels):\n    features_gini = []\n\n    # 遍历每个特征\n    for feature_idx in range(data.shape[1]):\n        current_feature_values = data[:, feature_idx]\n\n        # 假设的分割方式：基于每个值的基尼指数\n        gini_indexes = []\n        for split_value in np.unique(current_feature_values):\n            left_split = labels[current_feature_values <= split_value]\n            right_split = labels[current_feature_values > split_value]\n\n            # 计算左右分割的加权基尼指数\n            left_gini = calculate_gini_index(left_split)\n            right_gini = calculate_gini_index(right_split)\n            weighted_gini = (len(left_split) * left_gini + len(right_split) * right_gini) / len(labels)\n            gini_indexes.append(weighted_gini)\n\n        # 取这个特征下的最小基尼指数\n        min_gini = min(gini_indexes) if gini_indexes else 1  # 防止某个特征下所有值相同\n        features_gini.append((feature_idx, min_gini))\n\n    return features_gini\n\n@optimizer_register(\"CauMO\")\nclass CauMO(BOBase):\n    def __init__(self, config: Dict, rate_oversampling = 4, seed = 0, **kwargs):\n        super(CauMO, self).__init__(config=config)\n\n        self.init_method = \"Random\"\n        self.verbose = config.get(\"verbose\", True)\n        self.pop_size = config.get(\"pop_size\", 10)\n        self.ini_num = self.pop_size\n\n        self.second_space = None\n        self.third_space = None\n\n        self.model = []\n        self.acf = \"CauMOACF\"\n\n        self.rate_oversampling = rate_oversampling\n        self.num_duplicates = int(rate_oversampling * 4.0)\n        self.seed = seed\n\n    def initial_sample(self):\n        return self.random_sample(self.ini_num)\n\n    def random_sample(self, num_samples: int) -> List[Dict]:\n        \"\"\"\n        Initialize random samples.\n\n        :param num_samples: Number of random samples to generate\n        :return: List of dictionaries, each representing a random sample\n        \"\"\"\n        if self.input_dim is None:\n            raise ValueError(\n                \"Input dimension is not set. 
Call set_search_space() to set the input dimension.\"\n            )\n\n        random_samples = []\n        for _ in range(num_samples):\n            sample = {}\n            for var_info in self.search_space.config_space:\n                var_name = var_info[\"name\"]\n                var_domain = var_info[\"domain\"]\n                # Generate a random floating-point number within the specified range\n                random_value = np.random.uniform(var_domain[0], var_domain[1])\n                sample[var_name] = random_value\n            random_samples.append(sample)\n\n        random_samples = self.inverse_transform(random_samples)\n        return random_samples\n\n    def update_model(self, Data):\n        Target_Data = Data[\"Target\"]\n        assert \"X\" in Target_Data\n\n        X = Target_Data[\"X\"]\n        Y = Target_Data[\"Y\"]\n        assert Y.shape[0] == self.num_objective\n\n        if self.normalizer is not None:\n            Y_norm = np.array([self.normalizer(y) for y in Y])\n\n        if len(self.model_list) == 0:\n            self.create_model(X, Y_norm)\n        else:\n            # self.set_data(X, Y_norm)\n            self.fit_data(X, Y_norm)\n        # try:\n        #     for i in range(len(self.model_list)):\n        #         self.model_list[i].optimize_restarts(\n        #             num_restarts=1, verbose=self.verbose, robust=True\n        #         )\n        # except np.linalg.linalg.LinAlgError as e:\n        #     # break\n        #     print(\"Error: np.linalg.linalg.LinAlgError\")\n\n        self.Y_Norm = None\n    def create_model(self, X, Y):\n        assert self.num_objective is not None\n\n        compile_time_model = ExtraTreesRegressor(\n         n_estimators=200,\n         max_features='sqrt',\n         bootstrap=True,\n         random_state=self.seed,\n         max_samples = self.rate_oversampling / self.num_duplicates,\n        )\n        compile_time_model.fit(X, Y[2][:, np.newaxis])\n\n        # Kc = GPy.kern.RBF(input_dim=self.input_dim)\n        # compile_time_model = GPy.models.GPRegression(X, Y[2][:, np.newaxis], kernel=Kc, normalizer=None)\n\n\n        file_size_feature_rank = features_by_gini(X, Y[1])\n        self.file_size_rep_feature = sorted(file_size_feature_rank, key=lambda x: x[1])[0][0]\n\n        X_file = X.copy()\n        X_file[:, self.file_size_rep_feature] = np.clip(2 * (Y[2] - (-3)) / 6 - 1, -1, 1)\n\n        file_size_model = ExtraTreesRegressor(\n         n_estimators=200,\n         max_features='sqrt',\n         bootstrap=True,\n         random_state=self.seed,\n         max_samples = self.rate_oversampling / self.num_duplicates,\n        )\n\n        file_size_model.fit(X_file, Y[1][:, np.newaxis])\n\n        # Kf = GPy.kern.RBF(input_dim=self.input_dim)\n        # file_size_model = GPy.models.GPRegression(X_file, Y[1][:, np.newaxis], kernel=Kf, normalizer=None)\n\n        run_time_feature_rank = features_by_gini(X, Y[0])\n        run_time_feature_rank = sorted(run_time_feature_rank, key=lambda x: x[1])\n        self.st_run_time_rep_feature = run_time_feature_rank[0][0]\n        self.nd_run_time_rep_feature = run_time_feature_rank[1][0]\n        X_rtime = X.copy()\n        X_rtime[:, self.st_run_time_rep_feature] = np.clip(2 * (Y[2] - (-3)) / 6 - 1, -1, 1)\n        X_rtime[:, self.nd_run_time_rep_feature] = np.clip(2 * (Y[1] - (-3)) / 6 - 1, -1, 1)\n        # Kr = GPy.kern.RBF(input_dim=self.input_dim)\n        # running_time_model = GPy.models.GPRegression(X_rtime, Y[0][:, np.newaxis], kernel=Kr, 
normalizer=None)\n        running_time_model = ExtraTreesRegressor(\n         n_estimators=200,\n         max_features='sqrt',\n         bootstrap=True,\n         random_state=self.seed,\n         max_samples = self.rate_oversampling / self.num_duplicates,\n        )\n        running_time_model.fit(X_rtime, Y[0][:, np.newaxis])\n\n        # compile_time_model['.*Gaussian_noise.variance'].constrain_fixed(1.0e-4)\n        # compile_time_model['.*rbf.variance'].constrain_fixed(1.0)\n        # file_size_model['.*Gaussian_noise.variance'].constrain_fixed(1.0e-4)\n        # file_size_model['.*rbf.variance'].constrain_fixed(1.0)\n        # running_time_model['.*Gaussian_noise.variance'].constrain_fixed(1.0e-4)\n        # running_time_model['.*rbf.variance'].constrain_fixed(1.0)\n\n        self.model_list.append(compile_time_model)\n        self.model_list.append(file_size_model)\n        self.model_list.append(running_time_model)\n\n    def set_data(self, X, Y):\n        self.model_list[0].set_XY(X, Y[2][:, np.newaxis])\n        file_size_feature_rank = features_by_gini(X, Y[1])\n        self.file_size_rep_feature = sorted(file_size_feature_rank, key=lambda x: x[1])[0][0]\n        X_file = X.copy()\n        X_file[:, self.file_size_rep_feature] = np.clip(2 * (Y[1] - (-3)) / 6 - 1, -1, 1)\n        self.model_list[1].set_XY(X_file, Y[1][:, np.newaxis])\n\n        run_time_feature_rank = features_by_gini(X, Y[0])\n        run_time_feature_rank = sorted(run_time_feature_rank, key=lambda x: x[1])\n        self.st_run_time_rep_feature = run_time_feature_rank[0][0]\n        self.nd_run_time_rep_feature = run_time_feature_rank[1][0]\n        X_rtime = X.copy()\n        X_rtime[:, self.st_run_time_rep_feature] = np.clip(2 * (Y[2] - (-3)) / 6 - 1, -1, 1)\n        X_rtime[:, self.nd_run_time_rep_feature] = np.clip(2 * (Y[1] - (-3)) / 6 - 1, -1, 1)\n        self.model_list[2].set_XY(X_rtime, Y[0][:, np.newaxis])\n\n\n    def fit_data(self, X, Y):\n        self.model_list[0].fit(X, Y[2][:, np.newaxis])\n        file_size_feature_rank = features_by_gini(X, Y[1])\n        self.file_size_rep_feature = sorted(file_size_feature_rank, key=lambda x: x[1])[0][0]\n        X_file = X.copy()\n        X_file[:, self.file_size_rep_feature] = np.clip(2 * (Y[1] - (-3)) / 6 - 1, -1, 1)\n        self.model_list[1].fit(X_file, Y[1][:, np.newaxis])\n\n        run_time_feature_rank = features_by_gini(X, Y[0])\n        run_time_feature_rank = sorted(run_time_feature_rank, key=lambda x: x[1])\n        self.st_run_time_rep_feature = run_time_feature_rank[0][0]\n        self.nd_run_time_rep_feature = run_time_feature_rank[1][0]\n        X_rtime = X.copy()\n        X_rtime[:, self.st_run_time_rep_feature] = np.clip(2 * (Y[2] - (-3)) / 6 - 1, -1, 1)\n        X_rtime[:, self.nd_run_time_rep_feature] = np.clip(2 * (Y[1] - (-3)) / 6 - 1, -1, 1)\n        self.model_list[2].fit(X_rtime, Y[0][:, np.newaxis])\n\n\n    def suggest(self, n_suggestions: Union[None, int] = None) -> List[Dict]:\n        if self._X.size == 0:\n            suggests = self.initial_sample()\n            return suggests\n        elif self._X.shape[0] < self.ini_num:\n            pass\n        else:\n            if \"normalize\" in self.config:\n                self.normalizer = get_normalizer(self.config[\"normalize\"])\n\n            Data = {\"Target\": {\"X\": self._X, \"Y\": self._Y}}\n            self.update_model(Data)\n            suggested_sample, acq_value = self.evaluator.compute_batch(\n                None, context_manager=None\n            )\n            
suggested_sample = self.search_space.zip_inputs(suggested_sample)\n            suggested_sample = ndarray_to_vectors(\n                self._get_var_name(\"search\"), suggested_sample\n            )\n            design_suggested_sample = self.inverse_transform(suggested_sample)\n\n            return design_suggested_sample\n\n    def observe(self, input_vectors: Union[List[Dict], Dict], output_value: Union[List[Dict], Dict]) -> None:\n        super().observe(input_vectors, output_value)\n\n\n        if \"normalize\" in self.config:\n            self.normalizer = get_normalizer(self.config[\"normalize\"])\n\n        self.Y_Norm = np.array([self.normalizer(y) for y in self._Y])\n    def predict(self, X, full_cov=False):\n\n        pred_mean = np.zeros((X.shape[0], 0))\n        if full_cov:\n            pred_var = np.zeros((0, X.shape[0], X.shape[0]))\n        else:\n            pred_var = np.zeros((X.shape[0], 0))\n\n        # compile_time_mean, compile_time_var = self.model_list[0].predict(X, full_cov=full_cov)\n        compile_time_mean, compile_time_var = self.raw_predict(X, self.model_list[0])\n        X_file = X.copy()\n        X_file[:, self.file_size_rep_feature] = np.clip(2 * (compile_time_mean[:, 0] - (-3)) / 6 - 1, -1, 1)\n        file_size_mean, file_size_var = self.raw_predict(X_file, self.model_list[1])\n\n        X_run = X.copy()\n        X_run[:, self.st_run_time_rep_feature] = np.clip(2 * (compile_time_mean[:, 0] - (-3)) / 6 - 1, -1, 1)\n        X_run[:, self.nd_run_time_rep_feature] = np.clip(2 * (file_size_mean[:, 0] - (-3)) / 6 - 1, -1, 1)\n        run_time_mean, run_time_var = self.raw_predict(X_run, self.model_list[2])\n\n        pred_mean = np.hstack((pred_mean, run_time_mean, file_size_mean, compile_time_mean))\n\n        if full_cov:\n            pred_var = np.hstack((pred_var, run_time_var, file_size_var, compile_time_var))\n        else:\n            pred_var = np.hstack((pred_var, run_time_var, file_size_var, compile_time_var))\n\n        # pred_mean = np.append(pred_mean, run_time_mean)\n        # pred_mean = np.append(pred_mean, file_size_mean)\n        # pred_mean = np.append(pred_mean, compile_time_mean)\n        # pred_var = np.append(pred_var, run_time_var)\n        # pred_var = np.append(pred_var, file_size_var)\n        # pred_var = np.append(pred_var, compile_time_var)\n\n        return pred_mean, pred_var\n\n    def raw_predict(self, X, model):\n        _X_test = X.copy()\n\n        mu = model.predict(_X_test)\n        cov = self.raw_predict_var(_X_test, model, mu)\n        return mu[:,np.newaxis], cov[:,np.newaxis]\n\n    def raw_predict_var(self, X, trees,  predictions, min_variance=0.1):\n        std = np.zeros(len(X))\n        for tree in trees:\n            var_tree = tree.tree_.impurity[tree.apply(X)]\n\n            # This rounding off is done in accordance with the\n            # adjustment done in section 4.3.3\n            # of http://arxiv.org/pdf/1211.0906v2.pdf to account\n            # for cases such as leaves with 1 sample in which there\n            # is zero variance.\n            var_tree[var_tree < min_variance] = min_variance\n            mean_tree = tree.predict(X)\n            std += var_tree + mean_tree ** 2\n\n        std /= len(trees)\n        std -= predictions ** 2.0\n        std[std < 0.0] = 0.0\n        std = std ** 0.5\n        return std\n    def model_reset(self):\n        self.model_list = []\n        self.kernel_list = []\n\n    def get_fmin(self):\n        \"Get the minimum of the current model.\"\n        m, v = 
self.predict(self._X)\n\n        return m.min()\n\n    def get_fmin_by_id(self, idx):\n        \"Get the minimum of the current model.\"\n        m, v = self.predict_by_id(self._X, idx)\n\n        return m.min()\n"
  },
  {
    "path": "transopt/optimizer/MultiObjOptimizer/IEIPV.py",
    "content": ""
  },
  {
    "path": "transopt/optimizer/MultiObjOptimizer/MoeadEGO.py",
    "content": "import GPy, GPyOpt\nimport numpy as np\nfrom typing import Dict, Union, List\n\nfrom transopt.optimizer.optimizer_base import BOBase\nfrom transopt.utils.serialization import ndarray_to_vectors\nfrom agent.registry import optimizer_register\nfrom transopt.utils.Normalization import get_normalizer\nfrom transopt.utils.weights import init_weight, tchebycheff\n\n# from utils.common import findKBest\n# from revision.multiobjective_bayesian_optimization import MultiObjectiveBayesianOptimization\n# from revision.weighted_gpmodel import WeightedGPModel\n# from revision.multiobjective_EI import MultiObjectiveAcquisitionEI\n\n@optimizer_register(\"MoeadEGO\")\nclass MoeadEGO(BOBase):\n    def __init__(self, config: Dict, **kwargs):\n        super(MoeadEGO, self).__init__(config=config)\n\n        self.init_method = \"Random\"\n        self.verbose = config.get(\"verbose\", True)\n        self.n_weight = config.get(\"n_weight\", 10)\n        self.pop_size = config.get(\"pop_size\", self.n_weight)\n\n        if self.pop_size > self.n_weight:\n            self.pop_size = self.n_weight\n\n        self.ini_num = self.pop_size\n        \n        self.model = []\n        self.acf = \"MOEADEGO\"\n        self.weight = None\n\n    def initial_sample(self):\n        return self.random_sample(self.ini_num)\n\n    def random_sample(self, num_samples: int) -> List[Dict]:\n        \"\"\"\n        Initialize random samples.\n\n        :param num_samples: Number of random samples to generate\n        :return: List of dictionaries, each representing a random sample\n        \"\"\"\n        if self.input_dim is None:\n            raise ValueError(\n                \"Input dimension is not set. Call set_search_space() to set the input dimension.\"\n            )\n\n        random_samples = []\n        for _ in range(num_samples):\n            sample = {}\n            for var_info in self.search_space.config_space:\n                var_name = var_info[\"name\"]\n                var_domain = var_info[\"domain\"]\n                # Generate a random floating-point number within the specified range\n                random_value = np.random.uniform(var_domain[0], var_domain[1])\n                sample[var_name] = random_value\n            random_samples.append(sample)\n\n        random_samples = self.inverse_transform(random_samples)\n        return random_samples\n\n\n    def update_model(self, Data):\n        Target_Data = Data[\"Target\"]\n        assert \"X\" in Target_Data\n\n        X = Target_Data[\"X\"]\n        Y = Target_Data[\"Y\"]\n        assert Y.shape[0] == self.num_objective\n\n        if self.normalizer is not None:\n            Y_norm = np.array([self.normalizer(y) for y in Y])\n\n        if len(self.model_list) == 0:\n            self.create_model(X, Y_norm)\n        else:\n            ideal_point = np.min(Y_norm.T, axis=0)\n            for i in range(len(self.model_list)):\n                Y_weighted = tchebycheff(Y.T, self.weight[i], ideal=ideal_point)\n                self.model_list[i].set_XY(X, Y_weighted)\n\n        try:\n            for i in range(len(self.model_list)):\n                self.model_list[i].optimize_restarts(\n                    num_restarts=1, verbose=self.verbose, robust=True\n                )\n        except np.linalg.linalg.LinAlgError as e:\n            # break\n            print(\"Error: np.linalg.linalg.LinAlgError\")\n\n    def create_model(self, X, Y):\n        assert self.num_objective is not None\n\n        ideal_point = np.min(Y.T, axis=0)\n        
        self.weight = init_weight(self.num_objective, self.n_weight)\n        self.n_weight = self.weight.shape[0]\n\n        for i in range(self.n_weight):\n            kernel = GPy.kern.RBF(input_dim=self.input_dim)\n\n            Y_weighted = tchebycheff(Y.T, self.weight[i], ideal=ideal_point)\n\n            model = GPy.models.GPRegression(X, Y_weighted, kernel=kernel, normalizer=None)\n            model['.*Gaussian_noise.variance'].constrain_fixed(1.0e-4)\n            model['.*rbf.variance'].constrain_fixed(1.0)\n            self.kernel_list.append(model.kern)\n            self.model_list.append(model)\n\n    def suggest(self, n_suggestions: Union[None, int] = None) -> List[Dict]:\n        if self._X.size == 0:\n            suggests = self.initial_sample()\n            return suggests\n        elif self._X.shape[0] < self.ini_num:\n            pass\n        else:\n            if \"normalize\" in self.config:\n                self.normalizer = get_normalizer(self.config[\"normalize\"])\n\n            Data = {\"Target\": {\"X\": self._X, \"Y\": self._Y}}\n            self.update_model(Data)\n            suggested_sample, acq_value = self.evaluator.compute_batch(\n                None, context_manager=None\n            )\n            suggested_sample = self.search_space.zip_inputs(suggested_sample)\n            suggested_sample = ndarray_to_vectors(\n                self._get_var_name(\"search\"), suggested_sample\n            )\n            design_suggested_sample = self.inverse_transform(suggested_sample)\n\n            return design_suggested_sample\n\n    def predict(self, X, full_cov=False):\n        pred_mean = np.zeros((X.shape[0], 0))\n        if full_cov:\n            pred_var = np.zeros((0, X.shape[0], X.shape[0]))\n        else:\n            pred_var = np.zeros((X.shape[0], 0))\n        for model in self.model_list:\n            mean, var = model.predict(X, full_cov=full_cov)\n            pred_mean = np.append(pred_mean, mean, axis=1)\n            if full_cov:\n                pred_var = np.append(pred_var, [var], axis=0)\n            else:\n                pred_var = np.append(pred_var, var, axis=1)\n        return pred_mean, pred_var\n\n    def predict_by_id(self, X, idx, full_cov=False):\n        pred_mean = np.zeros((X.shape[0], 0))\n        if full_cov:\n            pred_var = np.zeros((0, X.shape[0], X.shape[0]))\n        else:\n            pred_var = np.zeros((X.shape[0], 0))\n        mean, var = self.model_list[idx].predict(X, full_cov=full_cov)\n        pred_mean = np.append(pred_mean, mean, axis=1)\n        if full_cov:\n            pred_var = np.append(pred_var, [var], axis=0)\n        else:\n            pred_var = np.append(pred_var, var, axis=1)\n        return pred_mean, pred_var\n\n    def model_reset(self):\n        self.model_list = []\n        self.kernel_list = []\n\n    def get_fmin(self):\n        \"Get the minimum of the current model.\"\n        m, v = self.predict(self._X)\n\n        return m.min()\n\n    def get_fmin_by_id(self, idx):\n        \"Get the minimum of the current model.\"\n        m, v = self.predict_by_id(self._X, idx)\n\n        return m.min()\n"
  },
  {
    "path": "transopt/optimizer/MultiObjOptimizer/ParEGO.py",
    "content": "import numpy as np\nimport GPy\nfrom typing import Dict, Union, List\n\nfrom transopt.optimizer.optimizer_base import BOBase\nfrom transopt.utils.serialization import ndarray_to_vectors\nfrom agent.registry import optimizer_register\nfrom transopt.utils.Normalization import get_normalizer\n\n\n@optimizer_register(\"ParEGO\")\nclass ParEGO(BOBase):\n    def __init__(self, config: Dict, **kwargs):\n        super(ParEGO, self).__init__(config=config)\n\n        self.init_method = \"Random\"\n\n        if \"verbose\" in config:\n            self.verbose = config[\"verbose\"]\n        else:\n            self.verbose = True\n\n        if \"init_number\" in config:\n            self.ini_num = config[\"init_number\"]\n        else:\n            self.ini_num = None\n\n        self.acf = \"EI\"\n        self.rho = 0.1\n\n    def scalarization(self, Y: np.ndarray, rho):\n        \"\"\"\n        scalarize observed output data\n        \"\"\"\n        theta = np.random.random_sample(Y.shape[0])\n        sum_theta = np.sum(theta)\n        theta = theta / sum_theta\n\n        theta_f = Y.T * theta\n        max_k = np.max(theta_f, axis=1)\n        rho_sum_theta_f = rho * np.sum(theta_f, axis=1)\n\n        return max_k + rho_sum_theta_f\n\n    def initial_sample(self):\n        return self.random_sample(self.ini_num)\n\n    def suggest(self, n_suggestions: Union[None, int] = None) -> List[Dict]:\n        return self.random_sample(self.ini_num)\n        \n        if self._X.size == 0:\n            suggests = self.initial_sample()\n            return suggests\n        elif self._X.shape[0] < self.ini_num:\n            pass\n        else:\n            if \"normalize\" in self.config:\n                self.normalizer = get_normalizer(self.config[\"normalize\"])\n\n            Data = {\"Target\": {\"X\": self._X, \"Y\": self._Y}}\n            self.update_model(Data)\n            suggested_sample, acq_value = self.evaluator.compute_batch(\n                None, context_manager=None\n            )\n            suggested_sample = self.search_space.zip_inputs(suggested_sample)\n            suggested_sample = ndarray_to_vectors(\n                self._get_var_name(\"search\"), suggested_sample\n            )\n            design_suggested_sample = self.inverse_transform(suggested_sample)\n\n            return design_suggested_sample\n\n    def update_model(self, Data):\n        Target_Data = Data[\"Target\"]\n        assert \"X\" in Target_Data\n\n        X = Target_Data[\"X\"]\n        Y = Target_Data[\"Y\"]\n        assert Y.shape[0] == self.num_objective\n\n        if self.normalizer is not None:\n            Y_norm = np.array([self.normalizer(y) for y in Y])\n\n        Y_scalar = self.scalarization(Y_norm, 0.1)[:, np.newaxis]\n\n        if len(self.model_list) == 0:\n            self.create_model(X, Y_scalar)\n        else:\n            self.model_list[0].set_XY(X, Y_scalar)\n\n        try:\n            self.model_list[0].optimize_restarts(\n                num_restarts=1, verbose=self.verbose, robust=True\n            )\n        except np.linalg.linalg.LinAlgError as e:\n            # break\n            print(\"Error: np.linalg.linalg.LinAlgError\")\n\n    def create_model(self, X, Y):\n        assert self.num_objective is not None\n\n        kernel = GPy.kern.RBF(input_dim=self.input_dim)\n        model = GPy.models.GPRegression(X, Y, kernel=kernel, normalizer=None)\n        model[\".*Gaussian_noise.variance\"].constrain_fixed(1.0e-4)\n        model[\".*rbf.variance\"].constrain_fixed(1.0)\n     
   self.kernel_list.append(model.kern)\n        self.model_list.append(model)\n        print(\"model state\")\n        for i, model in enumerate(self.model_list):\n            print(\"--------model for {}th object--------\".format(i))\n            print(model)\n\n    def predict(self, X, full_cov=False):\n        # X_copy = np.array([X])\n        pred_mean = np.zeros((X.shape[0], 0))\n        if full_cov:\n            pred_var = np.zeros((0, X.shape[0], X.shape[0]))\n        else:\n            pred_var = np.zeros((X.shape[0], 0))\n        for model in self.model_list:\n            mean, var = model.predict(X, full_cov=full_cov)\n            pred_mean = np.append(pred_mean, mean, axis=1)\n            if full_cov:\n                pred_var = np.append(pred_var, [var], axis=0)\n            else:\n                pred_var = np.append(pred_var, var, axis=1)\n        return pred_mean, pred_var\n\n    def random_sample(self, num_samples: int) -> List[Dict]:\n        \"\"\"\n        Initialize random samples.\n\n        :param num_samples: Number of random samples to generate\n        :return: List of dictionaries, each representing a random sample\n        \"\"\"\n        if self.input_dim is None:\n            raise ValueError(\n                \"Input dimension is not set. Call set_search_space() to set the input dimension.\"\n            )\n\n        random_samples = []\n        for _ in range(num_samples):\n            sample = {}\n            for var_info in self.search_space.config_space:\n                var_name = var_info[\"name\"]\n                var_domain = var_info[\"domain\"]\n                # Generate a random floating-point number within the specified range\n                random_value = np.random.uniform(var_domain[0], var_domain[1])\n                sample[var_name] = random_value\n            random_samples.append(sample)\n\n        random_samples = self.inverse_transform(random_samples)\n        return random_samples\n\n    def model_reset(self):\n        self.model_list = []\n        self.kernel_list = []\n\n    def get_fmin(self):\n        \"Get the minimum of the current model.\"\n        m, v = self.predict(self._X)\n\n        return m.min()\n"
  },
  {
    "path": "transopt/optimizer/MultiObjOptimizer/SMSEGO.py",
    "content": "import numpy as np\nimport GPy\nfrom typing import Dict, Union, List\nfrom transopt.optimizer.optimizer_base import BOBase\nfrom transopt.utils.serialization import ndarray_to_vectors\nfrom agent.registry import optimizer_register\nfrom agent.registry import optimizer_register\n\nfrom transopt.utils.Normalization import get_normalizer\n\n\n\n\n@optimizer_register('SMSEGO')\nclass SMSEGO(BOBase):\n    def __init__(self, config:Dict, **kwargs):\n        super(SMSEGO, self).__init__(config=config)\n\n        self.init_method = 'Random'\n\n        if 'verbose' in config:\n            self.verbose = config['verbose']\n        else:\n            self.verbose = True\n\n        if 'init_number' in config:\n            self.ini_num = config['init_number']\n        else:\n            self.ini_num = None\n\n        self.acf = 'SMSEGO'\n\n    def initial_sample(self):\n        return self.random_sample(self.ini_num)\n\n    def suggest(self, n_suggestions:Union[None, int] = None) ->List[Dict]:\n        if self._X.size == 0:\n            suggests = self.initial_sample()\n            return suggests\n        elif self._X.shape[0] < self.ini_num:\n            pass\n        else:\n            if 'normalize' in self.config:\n                self.normalizer = get_normalizer(self.config['normalize'])\n\n\n            Data = {'Target':{'X':self._X, 'Y':self._Y}}\n            self.update_model(Data)\n            suggested_sample, acq_value = self.evaluator.compute_batch(None, context_manager=None)\n            suggested_sample = self.search_space.zip_inputs(suggested_sample)\n            suggested_sample = ndarray_to_vectors(self._get_var_name('search'), suggested_sample)\n            design_suggested_sample = self.inverse_transform(suggested_sample)\n\n            return design_suggested_sample\n\n    def update_model(self, Data):\n        Target_Data = Data['Target']\n        assert 'X' in Target_Data\n\n        X = Target_Data['X']\n        Y = Target_Data['Y']\n        assert Y.shape[0] == self.num_objective\n\n        if self.normalizer is not None:\n            Y_norm = np.array([self.normalizer(y) for y in Y])\n\n\n        if len(self.model_list) == 0:\n            self.create_model(X, Y_norm)\n        else:\n            for i in range(self.num_objective):\n                self.model_list[i].set_XY(X, Y_norm[i].T[:, np.newaxis])\n\n        try:\n            for i in range(self.num_objective):\n                self.model_list[i].optimize_restarts(num_restarts=1, verbose=self.verbose, robust=True)\n        except np.linalg.linalg.LinAlgError as e:\n            # break\n            print('Error: np.linalg.linalg.LinAlgError')\n\n    def create_model(self, X, Y):\n        assert self.num_objective is not None\n        assert self.num_objective == Y.shape[0]\n\n        for l in range(self.num_objective):\n            kernel = GPy.kern.RBF(input_dim = self.input_dim)\n            model = GPy.models.GPRegression(X, Y[l][:, np.newaxis], kernel=kernel, normalizer=None)\n            model['.*Gaussian_noise.variance'].constrain_fixed(1.0e-4)\n            model['.*rbf.variance'].constrain_fixed(1.0)\n            self.kernel_list.append(model.kern)\n            self.model_list.append(model)\n        print(\"model state\")\n        for i, model in enumerate(self.model_list):\n            print(\"--------model for {}th object--------\".format(i))\n            print(model)\n\n    def predict(self, X, full_cov=False):\n        # X_copy = np.array([X])\n        if len(X.shape) ==1 :\n            X = 
X[np.newaxis,:]\n        pred_mean = np.zeros((X.shape[0], 0))\n        if full_cov:\n            pred_var = np.zeros((0, X.shape[0], X.shape[0]))\n        else:\n            pred_var = np.zeros((X.shape[0], 0))\n        for model in self.model_list:\n            mean, var = model.predict(X, full_cov=full_cov)\n            pred_mean = np.append(pred_mean, mean, axis=1)\n            if full_cov:\n                pred_var = np.append(pred_var, [var], axis=0)\n            else:\n                pred_var = np.append(pred_var, var, axis=1)\n        return pred_mean, pred_var\n\n\n    def random_sample(self, num_samples: int) -> List[Dict]:\n        \"\"\"\n        Initialize random samples.\n\n        :param num_samples: Number of random samples to generate\n        :return: List of dictionaries, each representing a random sample\n        \"\"\"\n        if self.input_dim is None:\n            raise ValueError(\"Input dimension is not set. Call set_search_space() to set the input dimension.\")\n\n        random_samples = []\n        for _ in range(num_samples):\n            sample = {}\n            for var_info in self.search_space.config_space:\n                var_name = var_info['name']\n                var_domain = var_info['domain']\n                # Generate a random floating-point number within the specified range\n                random_value = np.random.uniform(var_domain[0], var_domain[1])\n                sample[var_name] = random_value\n            random_samples.append(sample)\n\n        random_samples = self.inverse_transform(random_samples)\n        return random_samples\n\n    def model_reset(self):\n        self.model_list = []\n        self.kernel_list = []\n\n    def get_fmin(self):\n        \"Get the minimum of the current model.\"\n        pass\n"
  },
  {
    "path": "transopt/optimizer/MultiObjOptimizer/__init__.py",
    "content": "from transopt.optimizer.MultiObjOptimizer.ParEGO import ParEGO\nfrom transopt.optimizer.MultiObjOptimizer.SMSEGO import SMSEGO\nfrom transopt.optimizer.MultiObjOptimizer.CauMOpt import CauMO\nfrom transopt.optimizer.MultiObjOptimizer.MoeadEGO import MoeadEGO"
  },
  {
    "path": "transopt/optimizer/SingleObjOptimizer/KrigingOptimizer.py",
    "content": "import GPy\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom pymoo.algorithms.soo.nonconvex.ga import GA\nfrom pymoo.algorithms.soo.nonconvex.de import DE\nfrom pymoo.algorithms.soo.nonconvex.cmaes import CMAES\nfrom pymoo.algorithms.soo.nonconvex.pso import PSO\nfrom typing import Dict, Union, List\n\nfrom transopt.optimizer.optimizer_base import BOBase\nfrom transopt.utils.serialization import vectors_to_ndarray, output_to_ndarray\nfrom transopt.utils.serialization import ndarray_to_vectors\nfrom agent.registry import optimizer_register\nfrom transopt.utils.Normalization import get_normalizer\n\n\n@optimizer_register('KrigingEA')\nclass KrigingEA(BOBase):\n    def __init__(self, config: Dict, **kwargs):\n        super(KrigingGA, self).__init__(config=config)\n\n        self.init_method = 'latin'\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n        if 'verbose' in config:\n            self.verbose = config['verbose']\n        else:\n            self.verbose = True\n\n        if 'init_number' in config:\n            self.ini_num = config['init_number']\n        else:\n            self.ini_num = None\n\n        if 'ea' in config:\n            self.ea_name = config['ea']\n        else:\n            self.ea_name = 'GA'\n\n        # model_manage: 'best' or 'pre-select' or 'generation'\n        if 'model_manage' in config:\n            self.model_manage = config['model_manage']\n        else:\n            self.model_manage = 'best'\n\n        # 'best':k best individual, 'pre-select' and 'generation': every k generation\n        if 'k' in config:\n            self.k = config['k']\n        else:\n            self.k = 1\n\n        self.pop = None\n        self.pop_num = self.ini_num\n\n    def initial_sample(self):\n        return self.sample(self.ini_num)\n\n    def suggest(self, n_suggestions: Union[None, int] = None) -> List[Dict]:\n        if self._X.size == 0:\n            suggests = self.initial_sample()\n            return suggests\n        else:\n            if 'normalize' in self.config:\n                self.normalizer = get_normalizer(self.config['normalize'])\n\n            Data = {'Target': {'X': self._X, 'Y': self._Y}}\n            self.update_model(Data)\n            self.problem = EAProblem(self.search_space.config_space, self.predict)\n            # 得到新的种群\n            self.pop = self.ea.ask()\n            # 模型管理策略，选择需要准确评估的个体\n            elites = self.model_manage_strategy().reshape(-1, self.input_dim)\n            # 准确评估优秀个体\n            suggested_sample = self.search_space.zip_inputs(elites)\n            suggested_sample = ndarray_to_vectors(self._get_var_name('search'), suggested_sample)\n            design_suggested_sample = self.inverse_transform(suggested_sample)\n\n            return design_suggested_sample\n\n    def observe(self, input_vectors: Union[List[Dict], Dict], output_value: Union[List[Dict], Dict]) -> None:\n        self._data_handler.add_observation(input_vectors, output_value)\n\n        # Convert dict to list of dict\n        if isinstance(input_vectors, Dict):\n            input_vectors = [input_vectors]\n        if isinstance(output_value, Dict):\n            output_value = [output_value]\n\n        # Check if the lists are empty and return if they are\n        if len(input_vectors) == 0 and len(output_value) == 0:\n            return\n\n\n        self._validate_observation('design', input_vectors=input_vectors, output_value=output_value)\n        X = self.transform(input_vectors)\n\n    
        self._X = np.vstack((self._X, vectors_to_ndarray(self._get_var_name('search'), X))) if self._X.size else vectors_to_ndarray(self._get_var_name('search'), X)\n        self._Y = np.vstack((self._Y, output_to_ndarray(output_value))) if self._Y.size else output_to_ndarray(output_value)\n\n        if self.pop is not None:\n            self.pop[self.elites_idx].F = output_value\n            # Hand the evaluated population back to the EA\n            self.ea.tell(infills=self.pop)\n\n    def update_model(self, Data):\n        assert 'Target' in Data\n        target_data = Data['Target']\n        X = target_data['X']\n        Y = target_data['Y']\n\n        if self.normalizer is not None:\n            Y = self.normalizer(Y)\n\n        if self.obj_model is None:\n            self.create_model(X, Y)\n            self.problem = EAProblem(self.search_space.config_space, self.predict)\n            self.create_ea()\n        else:\n            self.obj_model.set_XY(X, Y)\n\n        try:\n            self.obj_model.optimize_restarts(num_restarts=1, verbose=self.verbose, robust=True)\n        except np.linalg.LinAlgError:\n            print('Error: np.linalg.LinAlgError')\n\n    def create_model(self, X, Y):\n        kern = GPy.kern.RBF(self.input_dim, ARD=True)\n        self.obj_model = GPy.models.GPRegression(X, Y, kernel=kern)\n        self.obj_model['Gaussian_noise.*variance'].constrain_bounded(1e-9, 1e-3)\n\n    def create_ea(self):\n        if self.ea_name == 'GA':\n            self.ea = GA(self.pop_num)\n        elif self.ea_name == 'DE':\n            self.ea = DE(self.pop_num)\n        elif self.ea_name == 'PSO':\n            self.ea = PSO(self.pop_num)\n        elif self.ea_name == 'CMAES':\n            self.ea = CMAES(self.pop_num)\n        self.ea.setup(self.problem, verbose=False)\n\n    def predict(self, X):\n        if X.ndim == 1:\n            X = X[None, :]\n\n        m, v = self.obj_model.predict(X)\n        return m, v\n\n    def sample(self, num_samples: int) -> List[Dict]:\n        if self.input_dim is None:\n            raise ValueError(\"Input dimension is not set. 
Call set_search_space() to set the input dimension.\")\n\n        temp = None\n        if self.init_method == 'latin':\n            temp = np.random.rand(num_samples, self.input_dim)\n            for i in range(self.input_dim):\n                temp[:, i] = (temp[:, i] + np.random.permutation(np.arange(num_samples))) / num_samples\n\n        samples = []\n        for i in range(num_samples):\n            sample = {}\n            for j, var_info in enumerate(self.search_space.config_space):\n                var_name = var_info['name']\n                var_domain = var_info['domain']\n                if self.init_method == 'random':\n                    value = np.random.uniform(var_domain[0], var_domain[1])\n                elif self.init_method == 'latin':\n                    value = temp[i][j] * (var_domain[1] - var_domain[0]) + var_domain[0]\n                sample[var_name] = value\n            samples.append(sample)\n\n        samples = self.inverse_transform(samples)\n        return samples\n\n    def model_reset(self):\n        self.obj_model = None\n\n    def get_fmin(self):\n        m, v = self.predict(self.obj_model.X)\n        return m.min()\n\n    def reset(self, task_name:str, design_space:Dict, search_space:Union[None, Dict] = None):\n        self.set_space(design_space, search_space)\n        self._X = np.empty((0,))  # Initializes an empty ndarray for input vectors\n        self._Y = np.empty((0,))\n        self._data_handler.reset_task(task_name, design_space)\n        self.sync_data(self._data_handler.get_input_vectors(), self._data_handler.get_output_value())\n        self.model_reset()\n\n    def model_manage_strategy(self):\n        self.ea.evaluator.eval(self.problem, self.pop)\n        pop_X = np.array([p.X for p in self.pop])\n        pop_F = np.array([p.F for p in self.pop])\n        if self.model_manage == 'best':\n            top_k_idx = sorted(range(len(pop_F)), key=lambda i: pop_F[i])[:self.k]\n            elites = pop_X[top_k_idx]\n        elif self.model_manage == 'pre-select':\n            total_pop_X = pop_X\n            total_pop_F = pop_F\n            for i in range(self.k - 1):\n                pop = self.ea.ask()\n                self.ea.evaluator.eval(self.problem, pop)\n                pop_X = np.array([p.X for p in pop])\n                pop_F = np.array([p.F for p in pop])\n                total_pop_X = np.concatenate((total_pop_X, pop_X))\n                total_pop_F = np.concatenate((total_pop_F, pop_F))\n            top_k_idx = sorted(range(len(total_pop_F)), key=lambda i: total_pop_F[i])[:self.ini_num]\n            elites = total_pop_X[top_k_idx]\n        elif self.model_manage == 'generation':\n            # Fall back to the current population when k == 1\n            pop = self.pop\n            for i in range(self.k - 1):\n                pop = self.ea.ask()\n            self.ea.evaluator.eval(self.problem, pop)\n            pop_X = np.array([p.X for p in pop])\n            top_k_idx = range(len(pop_X))\n            elites = pop_X\n        else:\n            raise ValueError(f\"Invalid model manage strategy: {self.model_manage}\")\n        self.elites_idx = top_k_idx\n        return elites\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n
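    # Usage sketch (hedged): EAProblem adapts the surrogate's predict() so pymoo\n    # can evolve a population on the cheap model instead of the true objective:\n    #     problem = EAProblem(search_space.config_space, optimizer.predict)\n    #     ea = GA(pop_size=20)\n    #     ea.setup(problem, verbose=False)\n    #     pop = ea.ask()  # candidates, later screened by model_manage_strategy()\n\n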
    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = self.predict(x)"
  },
  {
    "path": "transopt/optimizer/SingleObjOptimizer/LFL.py",
    "content": "import numpy as np\nimport GPy\nfrom paramz import ObsAr\nfrom optimizer.acquisition_function.get_acf import get_ACF\nfrom transopt.optimizer.acquisition_function.sequential import Sequential\nfrom typing import Dict, Union, List\nfrom transopt.optimizer.optimizer_base import BOBase\nfrom transopt.utils.serialization import ndarray_to_vectors\nfrom agent.registry import optimizer_register\nfrom transopt.utils.Kernel import construct_multi_objective_kernel\nfrom transopt.optimizer.model.MPGP import MPGP\nfrom optimizer.model.GP_BAK import PriorGP\nfrom transopt.utils import Prior\n\nfrom GPy import util\nfrom GPy.inference.latent_function_inference import expectation_propagation\nfrom GPy.inference.latent_function_inference import ExactGaussianInference\nfrom GPy.likelihoods.multioutput_likelihood import MixedNoise\n\nfrom transopt.utils.Normalization import get_normalizer\n\n@optimizer_register('LFL')\nclass LFLOptimizer(BOBase):\n    def __init__(self, config:Dict, **kwargs):\n        super(LFLOptimizer, self).__init__(config=config)\n\n        self.init_method = 'LFL'\n        self.knowledge_num = 2\n        self.ini_quantile = 0.5\n        self.anchor_points = None\n        self.anchor_num = None\n        self.model = None\n        self.output_dim = None\n\n        if 'verbose' in config:\n            self.verbose = config['verbose']\n        else:\n            self.verbose = False\n\n        if 'init_number' in config:\n            self.ini_num = config['init_number']\n        else:\n            self.ini_num = None\n\n        if 'acf' in config:\n            self.acf = config['acf']\n        else:\n            self.acf = 'EI'\n\n\n    def reset(self, design_space:Dict, search_sapce:Union[None, Dict] = None):\n        self.set_space(design_space, search_sapce)\n        self.obj_model = None\n        self.var_model = None\n        self.output_dim = None\n        self.acqusition = get_ACF(self.acf, model=self, search_space=self.search_space, config=self.config)\n        self.evaluator = Sequential(self.acqusition)\n\n\n    def initial_sample(self):\n        if self.anchor_points is None:\n            self.anchor_num = int(self.ini_quantile * self.ini_num)\n            self.anchor_points  = self.random_sample(self.anchor_num)\n\n        random_samples = self.random_sample(self.ini_num - self.anchor_num)\n        samples = self.anchor_points.copy()\n        samples.extend(random_samples)\n\n        return samples\n\n    def random_sample(self, num_samples: int) -> List[Dict]:\n        \"\"\"\n        Initialize random samples.\n\n        :param num_samples: Number of random samples to generate\n        :return: List of dictionaries, each representing a random sample\n        \"\"\"\n        if self.input_dim is None:\n            raise ValueError(\"Input dimension is not set. 
Call set_search_space() to set the input dimension.\")\n\n        random_samples = []\n        for _ in range(num_samples):\n            sample = {}\n            for var_info in self.search_space.config_space:\n                var_name = var_info['name']\n                var_domain = var_info['domain']\n                # Generate a random floating-point number within the specified range\n                random_value = np.random.uniform(var_domain[0], var_domain[1])\n                sample[var_name] = random_value\n            random_samples.append(sample)\n\n        random_samples = self.inverse_transform(random_samples)\n        return random_samples\n\n\n\n    def combine_data(self):\n        if len(self.aux_data) == 0:\n            return {'Target':{'X':self._X, 'Y':self._Y}}\n        else:\n            return {}\n\n    def suggest(self, n_suggestions:Union[None, int] = None)->List[Dict]:\n        if self._X.size == 0:\n            suggests = self.initial_sample()\n            return suggests\n        elif self._X.shape[0] < self.ini_num:\n            pass\n        else:\n            if 'normalize' in self.config:\n                self.normalizer = get_normalizer(self.config['normalize'])\n\n            if self.aux_data is not None:\n                pass\n            else:\n                self.aux_data = {}\n\n            Data = self.combine_data()\n            self.update_model(Data)\n            suggested_sample, acq_value = self.evaluator.compute_batch(None, context_manager=None)\n            suggested_sample = self.search_space.zip_inputs(suggested_sample)\n            suggested_sample = ndarray_to_vectors(self._get_var_name('search'),suggested_sample)\n            design_suggested_sample = self.inverse_transform(suggested_sample)\n\n            return design_suggested_sample\n\n    def create_model(self, X_list, Y_list, mf=None, prior:list=[]):\n        X, Y, output_index = util.multioutput.build_XY(X_list, Y_list)\n\n        if self.output_dim > 1:\n            K = construct_multi_objective_kernel(self.input_dim, self.output_dim, base_kernel='RBF', Q=1, rank=2)\n            inference_method = ExactGaussianInference()\n            likelihoods_list = [GPy.likelihoods.Gaussian(name=\"Gaussian_noise_obj_%s\" % j) for y, j in\n                                zip(Y, range(self.output_dim))]\n            likelihood = MixedNoise(likelihoods_list=likelihoods_list)\n\n            self.obj_model = MPGP(X, Y, K, likelihood, Y_metadata={'output_index': output_index},\n                                  inference_method=inference_method, mean_function=mf, name=f'OBJ MPGP')\n\n            self.obj_model['mixed_noise.Gaussian_noise.*variance'].constrain_bounded(1e-9, 1e-3)\n            # self.obj_model['constmap.C'].constrain_fixed(0)\n            self.obj_model['ICM0.B.kappa'].constrain_fixed(np.zeros(shape=(self.output_dim,)))\n\n        else:\n            if 'kernel' in self.config:\n                kern = GPy.kern.RBF(self.input_dim, ARD=False)\n            else:\n                kern = GPy.kern.RBF(self.input_dim, ARD=False)\n            X = X_list[0]\n            Y = Y_list[0]\n\n            self.obj_model = PriorGP(X, Y, kernel=kern, mean_function = mf)\n            self.obj_model['Gaussian_noise.*variance'].constrain_bounded(1e-9, 1e-3)\n\n\n        if len(prior) == 0:\n            self.prior_list = []\n            self.prior_list.append(Prior.LogGaussian(1, 2, 'lengthscale'))\n            self.prior_list.append(Prior.LogGaussian(0.5, 2, 'variance'))\n        else:\n            
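# A non-empty prior list overrides the LogGaussian defaults above; each entry\n            # is assumed to name the kernel parameter it attaches to, as in\n            # Prior.LogGaussian(1, 2, 'lengthscale').\n            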
self.prior_list = prior\n\n        for i in range(len(self.prior_list)):\n            self.obj_model.set_prior(self.prior_list[i])\n\n    def update_model(self, Data):\n        ## Train target model\n        assert 'Target' in Data\n        target_data = Data['Target']\n        X_list = []\n        Y_list = []\n\n        if 'History' in Data:\n            history_data = Data['History']\n            X_list.extend(list(history_data['X']))\n            Y_list.extend(list(history_data['Y']))\n            source_num = len(history_data['Y'])\n        else:\n            source_num = 0\n            history_data = {}\n\n        if 'Gym' in Data:\n            Gym_data = Data['Gym']\n            gym_num = len(Gym_data['Gym'])\n            X_list.extend(list(Gym_data['X']))\n            Y_list.extend(list(Gym_data['Y']))\n        else:\n            gym_num = 0\n            Gym_data = {}\n\n        output_dim = gym_num + source_num + 1\n\n        X_list.append(target_data['X'])\n        Y_list.append(target_data['Y'])\n\n        if self.normalizer is not None:\n            Y_list = self.normalizer(Y_list)\n\n        if self.output_dim != output_dim:\n            self.output_dim = output_dim\n            self.create_model(X_list, Y_list, prior=[])\n        else:\n            self.set_XY(X_list, Y_list)\n            if self.var_model is not None:\n                self.var_model.set_XY(target_data['X'][0], target_data['Y'][0])\n\n        try:\n            self.obj_model.optimize_restarts(messages=False, num_restarts=1,\n                                             verbose=self.verbose)\n            if self.var_model is not None:\n                self.var_model.optimize_restarts(messages=False, num_restarts=1,\n                                                verbose=self.verbose)\n        except np.linalg.linalg.LinAlgError as e:\n            # break\n            print('Error: np.linalg.linalg.LinAlgError')\n\n\n    def predict(self, X):\n        \"\"\"\n        Predictions with the model. Returns posterior means and standard deviations at X. Note that this is different in GPy where the variances are given.\n\n        Parameters:\n            X (np.ndarray) - points to run the prediction for.\n            with_noise (bool) - whether to add noise to the prediction. 
Default is True.\n        \"\"\"\n        if X.ndim == 1:\n            X = X[None,:]\n        task_id = self.output_dim - 1\n\n        if self.output_dim >1:\n            noise_dict  = {'output_index': np.array([task_id] * X.shape[0])[:,np.newaxis].astype(int)}\n            X = np.hstack((X, noise_dict['output_index']))\n\n            m, v = self.obj_model.predict(X, Y_metadata=noise_dict, full_cov=False, include_likelihood=True)\n            v = np.clip(v, 1e-10, np.inf)\n\n        else:\n            m, v = self.obj_model.predict(X)\n\n        # We can take the square root because v is just a diagonal matrix of variances\n        return m, v\n\n    def var_predict(self, X):\n        if X.ndim == 1:\n            X = X[None,:]\n        task_id = self.output_dim - 1\n\n        if self.model_name == 'MOGP':\n            noise_dict  = {'output_index': np.array([task_id] * X.shape[0])[:,np.newaxis].astype(int)}\n            X = np.hstack((X, noise_dict['output_index']))\n\n            _, v1 = self.var_model.predict(X)\n            v1 = np.clip(v1, 1e-10, np.inf)\n            v = v1\n        else:\n            m, v = self.obj_model.predict(X)\n\n        # We can take the square root because v is just a diagonal matrix of variances\n        return v\n\n    def obj_posterior_samples(self, X, sample_size):\n        if X.ndim == 1:\n            X = X[None,:]\n        task_id = self.output_dim - 1\n\n        if self.model_name == 'SHGP' or \\\n                self.model_name == 'HGP' or \\\n                self.model_name == 'MHGP' or \\\n                self.model_name == 'BHGP' or \\\n                self.model_name == 'RPGE':\n            samples_obj = self.posterior_samples(X, model_id=0,size=sample_size)\n        elif self.model_name == 'MOGP':\n            noise_dict = {'output_index': np.array([task_id] * X.shape[0])[:, np.newaxis].astype(int)}\n            X_zip = np.hstack((X, noise_dict['output_index']))\n\n            samples_obj = self.obj_model.posterior_samples(X_zip, size=sample_size, Y_metadata=noise_dict) # grid * 1 * sample_num\n\n        else:\n            raise NameError\n\n        return samples_obj\n\n    def get_fmin(self):\n        \"Get the minimum of the current model.\"\n        m, v = self.predict(self.obj_model.X)\n\n        return m.min()\n\n    def set_XY(self, X=None, Y=None):\n        if isinstance(X, list):\n            X, _, self.obj_model.output_index = util.multioutput.build_XY(X, None)\n        if isinstance(Y, list):\n            _, Y, self.obj_model.output_index = util.multioutput.build_XY(Y, Y)\n\n        self.obj_model.update_model(False)\n        if Y is not None:\n            self.obj_model.Y = ObsAr(Y)\n            self.obj_model.Y_normalized = self.obj_model.Y\n        if X is not None:\n            self.obj_model.X = ObsAr(X)\n\n        self.obj_model.Y_metadata = {'output_index': self.obj_model.output_index, 'trials': np.ones(self.obj_model.output_index.shape)}\n        if isinstance(self.obj_model.inference_method, expectation_propagation.EP):\n            self.obj_model.inference_method.reset()\n        self.obj_model.update_model(True)\n\n    def samples(self, gp):\n        \"\"\"\n        Returns a set of samples of observations based on a given value of the latent variable.\n\n        :param gp: latent variable\n        \"\"\"\n        orig_shape = gp.shape\n        gp = gp.flatten()\n        #orig_shape = gp.shape\n        gp = gp.flatten()\n        Ysim = np.array([np.random.normal(gpj, scale=np.sqrt(1e-2), size=1) for gpj in gp])\n        return 
Ysim.reshape(orig_shape)\n\n    def posterior_samples_f(self,X, model_id, size=10):\n        \"\"\"\n        Samples the posterior GP at the points X.\n\n        :param X: The points at which to take the samples.\n        :type X: np.ndarray (Nnew x self.input_dim)\n        :param size: the number of a posteriori samples.\n        :type size: int.\n        :returns: set of simulations\n        :rtype: np.ndarray (Nnew x D x samples)\n        \"\"\"\n        m, v = self.obj_model.predict(X, return_full=True)\n\n        def sim_one_dim(m, v):\n            return np.random.multivariate_normal(m, v, size).T\n\n        return sim_one_dim(m.flatten(), v)[:, np.newaxis, :]\n\n\n    def posterior_samples(self, X, model_id, size=10):\n        \"\"\"\n        Samples the posterior GP at the points X.\n\n        :param X: the points at which to take the samples.\n        :type X: np.ndarray (Nnew x self.input_dim.)\n        :param size: the number of a posteriori samples.\n        :type size: int.\n        :param noise_model: for mixed noise likelihood, the noise model to use in the samples.\n        :type noise_model: integer.\n        :returns: Ysim: set of simulations,\n        :rtype: np.ndarray (D x N x samples) (if D==1 we flatten out the first dimension)\n        \"\"\"\n        fsim = self.posterior_samples_f(X, model_id=model_id, size=size)\n\n        if fsim.ndim == 3:\n            for d in range(fsim.shape[1]):\n                fsim[:, d] = self.samples(fsim[:, d])\n        else:\n            fsim = self.samples(fsim)\n        return fsim\n\n    def get_model_para(self):\n\n        if self.model_name == 'MOGP':\n            lengthscale = self.obj_model['.*lengthscale'][0]\n            variance = self.obj_model['.*rbf.*variance'][0]\n        else:\n            lengthscale = self.obj_model['rbf.*lengthscale'][0]\n            variance = self.obj_model['rbf.*variance'][0]\n\n        return lengthscale, variance\n\n    def update_prior(self, parameters):\n        for k, v in parameters.items():\n            prior = self.obj_model.get_prior(k)\n            cur_stat = prior.getstate()\n            # mu = (self.kappa * cur_stat[0] + v) / (self.kappa + 1)\n            # var = cur_stat[1] + (self.kappa * (v - cur_stat[0]) ** 2) / (2.0 * (self.kappa + 1.0))\n            mu = np.mean(parameters[k])\n            var = np.var(parameters[k])\n\n            self.obj_model.update_prior(k, [mu, var])"
  },
  {
    "path": "transopt/optimizer/SingleObjOptimizer/MetaLearningOptimizer.py",
    "content": "import numpy as np\nimport GPy\nimport GPyOpt\nfrom GPy import util\nfrom paramz import ObsAr\n\nfrom GPy.inference.latent_function_inference import expectation_propagation\nfrom transopt.optimizer.optimizer_base import OptimizerBase\n# from Model.HyperBO import hyperbo\n\nfrom external.transfergpbo import models\n\nfrom emukit.core import ContinuousParameter\nfrom emukit.core import ParameterSpace\n\nfrom external.FSBO.fsbo_modules import FSBO, DeepKernelGP\nfrom external.FSBO.fsbo_utils import totorch\nimport os\n\nfrom external.transfergpbo.models import (\n    WrapperBase,\n    MHGP,\n    SHGP,\n    BHGP,\n)\n\n\ndef get_model(\n    model_name: str, space: ParameterSpace\n) -> WrapperBase:\n    \"\"\"Create the model object.\"\"\"\n    model_class = getattr(models, model_name)\n    if model_class == MHGP or model_class == SHGP or model_class == BHGP:\n        model = model_class(space.dimensionality)\n    else:\n        kernel = GPy.kern.RBF(space.dimensionality)\n        model = model_class(kernel=kernel)\n    model = WrapperBase(model)\n\n    return model\n\nclass MetaBOOptimizer(OptimizerBase):\n    analytical_gradient_prediction = False  # --- Needed in all models to check is the gradients of acquisitions are computable.\n\n    def __init__(self, Xdim, bounds, kernel='RBF', likelihood=None, model_name='MOGP', acf_name='EI',\n                 optimizer='bfgs',  verbose=True, seed = 0):\n        self.kernel = kernel\n        self.likelihood = likelihood\n        self.Xdim = Xdim\n        self.bounds = bounds\n        self.acf_name = acf_name\n        self.Seed = seed\n        self.name = 'meta'\n\n        # Set decision space\n        Variables = []\n        task_design_space = []\n        for var in range(Xdim):\n            v_n = f'x{var + 1}'\n            Variables.append(ContinuousParameter(v_n, self.bounds[0][var], self.bounds[1][var]))\n            var_dic = {'name': f'var_{var}', 'type': 'continuous',\n                       'domain': tuple([self.bounds[0][var], self.bounds[1][var]])}\n            task_design_space.append(var_dic.copy())\n        self.model_space = ParameterSpace(Variables)\n        self.acf_space = GPyOpt.Design_space(space=task_design_space)\n\n        self.optimizer = optimizer\n        self.verbose = verbose\n\n\n    def create_model(self, model_name, Meta_data, Target_data):\n        self.model_name = model_name\n        source_num = len(Meta_data['Y'])\n        self.output_dim = source_num + 1\n\n        ###Construct objective model\n        if self.model_name == 'HyperBO':\n            self.obj_model = hyperbo()\n            self.obj_model.pretrain(Meta_data, Target_data)\n\n        elif self.model_name == 'FSBO':\n            checkpoint_path = './External/FSBO/checkpoints/'\n            self.training_model = FSBO(input_size=self.Xdim, checkpoint_path = checkpoint_path, batch_size=len(Meta_data['X'][0]))\n            train_data = {}\n            for i in range(source_num):\n                train_data[i] = {'X':Meta_data['X'][i], 'y':Meta_data['Y'][i]}\n            self.training_model.set_data(train_data=train_data)\n            self.training_model.meta_train(epochs=1000)\n            log_dir = os.path.join(checkpoint_path, \"log.txt\"),\n            self.obj_model = DeepKernelGP(epochs = 1000, input_size=self.Xdim, checkpoint = checkpoint_path + f'Seed_{self.Seed}_{source_num+1}', log_dir= log_dir, seed=self.Seed)\n            self.device = 'cpu'\n            self.obj_model.X_obs, self.obj_model.y_obs = totorch(Target_data['X'], 
self.device), totorch(Target_data['Y'], self.device).reshape(-1)\n            self.obj_model.train()\n\n        else:\n            if self.kernel == None or self.kernel == 'RBF':\n                kern = GPy.kern.RBF(self.Xdim, ARD=True)\n            else:\n                kern = GPy.kern.RBF(self.Xdim, ARD=True)\n            X = Target_data['X']\n            Y = Target_data['Y']\n\n            self.obj_model = GPy.models.GPRegression(X, Y, kernel=kern)\n            self.obj_model['Gaussian_noise.*variance'].constrain_bounded(1e-9, 1e-3)\n            try:\n                self.obj_model.optimize_restarts(messages=True, num_restarts=1, verbose=self.verbose)\n            except np.linalg.linalg.LinAlgError as e:\n                # break\n                print('Error: np.linalg.linalg.LinAlgError')\n\n\n    def updateModel(self, Target_data):\n        ###Construct objective model\n        if self.model_name == 'HyperBO':\n            self.obj_model.retrain(Target_data)\n        elif self.model_name == 'FSBO':\n            self.obj_model.X_obs, self.obj_model.y_obs = totorch(Target_data['X'], self.device), totorch(\n                Target_data['Y'], self.device).reshape(-1)\n            self.obj_model.train()\n        else:\n            X = Target_data['X']\n            Y = Target_data['Y']\n            self.obj_model.set_XY(X, Y)\n            try:\n                self.obj_model.optimize_restarts(messages=True, num_restarts=1,\n                                             verbose=self.verbose)\n            except np.linalg.linalg.LinAlgError as e:\n                # break\n                print('Error: np.linalg.linalg.LinAlgError')\n\n    def resetModel(self, Source_data, Target_data):\n        ## Train target model\n        pass\n\n\n\n    def predict(self, X):\n        \"\"\"\n        Predictions with the model. Returns posterior means and standard deviations at X. Note that this is different in GPy where the variances are given.\n\n        Parameters:\n            X (np.ndarray) - points to run the prediction for.\n            with_noise (bool) - whether to add noise to the prediction. 
Default is True.\n        \"\"\"\n\n        if self.model_name == 'HyperBO':\n            m, v = self.obj_model.predict(X)\n            m = np.array(m)\n            v = np.array(v)\n\n        elif self.model_name == 'FSBO':\n            X = totorch(X, self.device)\n            m,v = self.obj_model.predict(X)\n            m = m[:,np.newaxis]\n            v = v[:,np.newaxis]\n        else:\n            m, v = self.obj_model.predict(X)\n\n        # We can take the square root because v is just a diagonal matrix of variances\n        return m, v\n\n\n    #\n    # def obj_posterior_samples(self, X, sample_size):\n    #     if X.ndim == 1:\n    #         X = X[None,:]\n    #     task_id = self.output_dim - 1\n    #\n    #     if self.model_name == 'WSGP' or \\\n    #             self.model_name == 'HGP':\n    #         samples_obj = self.posterior_samples(X, model_id=0,size=sample_size)\n    #     elif self.model_name == 'MOGP':\n    #         noise_dict = {'output_index': np.array([task_id] * X.shape[0])[:, np.newaxis].astype(int)}\n    #         X_zip = np.hstack((X, noise_dict['output_index']))\n    #\n    #         samples_obj = self.obj_model.posterior_samples(X_zip, size=sample_size, Y_metadata=noise_dict) # grid * 1 * sample_num\n    #\n    #     else:\n    #         raise NameError\n    #\n    #     return samples_obj\n\n    def get_fmin(self):\n        \"Get the minimum of the current model.\"\n        if self.model_name == 'HyperBO':\n            m = np.array(self.obj_model._Y)\n            return np.min(m)\n        elif self.model_name == 'FSBO':\n            m = self.obj_model.y_obs.detach().to(\"cpu\").numpy().reshape(-1,)\n            return np.min(m)\n        else:\n            m, v = self.predict(self.obj_model.X)\n\n        return m.min()\n\n    def set_XY(self, X=None, Y=None):\n        if isinstance(X, list):\n            X, _, self.obj_model.output_index = util.multioutput.build_XY(X, None)\n        if isinstance(Y, list):\n            _, Y, self.obj_model.output_index = util.multioutput.build_XY(Y, Y)\n\n        self.obj_model.update_model(False)\n        if Y is not None:\n            self.obj_model.Y = ObsAr(Y)\n            self.obj_model.Y_normalized = self.obj_model.Y\n        if X is not None:\n            self.obj_model.X = ObsAr(X)\n\n        self.obj_model.Y_metadata = {'output_index': self.obj_model.output_index, 'trials': np.ones(self.obj_model.output_index.shape)}\n        if isinstance(self.obj_model.inference_method, expectation_propagation.EP):\n            self.obj_model.inference_method.reset()\n        self.obj_model.update_model(True)\n\n    def samples(self, gp):\n        \"\"\"\n        Returns a set of samples of observations based on a given value of the latent variable.\n\n        :param gp: latent variable\n        \"\"\"\n        orig_shape = gp.shape\n        gp = gp.flatten()\n        #orig_shape = gp.shape\n        gp = gp.flatten()\n        Ysim = np.array([np.random.normal(gpj, scale=np.sqrt(1e-2), size=1) for gpj in gp])\n        return Ysim.reshape(orig_shape)\n\n    def posterior_samples_f(self,X, model_id, size=10):\n        \"\"\"\n        Samples the posterior GP at the points X.\n\n        :param X: The points at which to take the samples.\n        :type X: np.ndarray (Nnew x self.input_dim)\n        :param size: the number of a posteriori samples.\n        :type size: int.\n        :returns: set of simulations\n        :rtype: np.ndarray (Nnew x D x samples)\n        \"\"\"\n        m, v = self.obj_model.predict(X, return_full=True)\n\n  
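      # With return_full=True the model is expected to return the full posterior\n        # covariance, so each draw below is a jointly consistent sample path over X.\n\n  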
      def sim_one_dim(m, v):\n            return np.random.multivariate_normal(m, v, size).T\n\n        return sim_one_dim(m.flatten(), v)[:, np.newaxis, :]\n\n\n    def posterior_samples(self, X, model_id, size=10):\n        \"\"\"\n        Samples the posterior GP at the points X.\n\n        :param X: the points at which to take the samples.\n        :type X: np.ndarray (Nnew x self.input_dim.)\n        :param size: the number of a posteriori samples.\n        :type size: int.\n        :param noise_model: for mixed noise likelihood, the noise model to use in the samples.\n        :type noise_model: integer.\n        :returns: Ysim: set of simulations,\n        :rtype: np.ndarray (D x N x samples) (if D==1 we flatten out the first dimension)\n        \"\"\"\n        fsim = self.posterior_samples_f(X, model_id=model_id, size=size)\n\n        if fsim.ndim == 3:\n            for d in range(fsim.shape[1]):\n                fsim[:, d] = self.samples(fsim[:, d])\n        else:\n            fsim = self.samples(fsim)\n        return fsim"
  },
  {
    "path": "transopt/optimizer/SingleObjOptimizer/MultitaskOptimizer.py",
    "content": "import numpy as np\nimport GPy\nfrom typing import Dict, Union, List\nfrom transopt.optimizer.optimizer_base import BOBase\nfrom transopt.utils.serialization import ndarray_to_vectors\nfrom agent.registry import optimizer_register\nfrom paramz import ObsAr\nfrom transopt.utils.Normalization import get_normalizer\nfrom GPy import util\nfrom transopt.utils.Kernel import construct_multi_objective_kernel\nfrom GPy.inference.latent_function_inference import expectation_propagation\nfrom GPy.inference.latent_function_inference import ExactGaussianInference\nfrom GPy.likelihoods.multioutput_likelihood import MixedNoise\nfrom transopt.optimizer.model.MPGP import MPGP\n\n@optimizer_register('MTBO')\nclass MultitaskBO(BOBase):\n    def __init__(self, config:Dict, **kwargs):\n        super(MultitaskBO, self).__init__(config=config)\n        self.init_method = 'Random'\n        self.model = None\n\n        if 'verbose' in config:\n            self.verbose = config['verbose']\n        else:\n            self.verbose = True\n\n        if 'init_number' in config:\n            self.ini_num = config['init_number']\n        else:\n            self.ini_num = None\n\n        if 'acf' in config:\n            self.acf = config['acf']\n        else:\n            self.acf = 'EI'\n\n\n    def initial_sample(self):\n        return self.random_sample(self.ini_num)\n\n    def random_sample(self, num_samples: int) -> List[Dict]:\n        \"\"\"\n        Initialize random samples.\n\n        :param num_samples: Number of random samples to generate\n        :return: List of dictionaries, each representing a random sample\n        \"\"\"\n        if self.input_dim is None:\n            raise ValueError(\"Input dimension is not set. Call set_search_space() to set the input dimension.\")\n\n        random_samples = []\n        for _ in range(num_samples):\n            sample = {}\n            for var_info in self.search_space.config_space:\n                var_name = var_info['name']\n                var_domain = var_info['domain']\n                # Generate a random floating-point number within the specified range\n                random_value = np.random.uniform(var_domain[0], var_domain[1])\n                sample[var_name] = random_value\n            random_samples.append(sample)\n\n        random_samples = self.inverse_transform(random_samples)\n        return random_samples\n\n    def suggest(self, n_suggestions:Union[None, int] = None) ->List[Dict]:\n        if self._X.size == 0:\n            suggests = self.initial_sample()\n            return suggests\n        elif self._X.shape[0] < self.ini_num:\n            pass\n        else:\n            if 'normalize' in self.config:\n                self.normalizer = get_normalizer(self.config['normalize'])\n\n\n            if len(self.aux_data):\n                Data = self.aux_data\n            else:\n                Data = {}\n            Data['Target'] = {'X':self._X, 'Y':self._Y}\n            self.update_model(Data)\n            suggested_sample, acq_value = self.evaluator.compute_batch(None, context_manager=None)\n            suggested_sample = self.search_space.zip_inputs(suggested_sample)\n            suggested_sample = ndarray_to_vectors(self._get_var_name('search'), suggested_sample)\n            design_suggested_sample = self.inverse_transform(suggested_sample)\n\n            return design_suggested_sample\n\n    def update_model(self, Data):\n        assert 'Target' in Data\n        X_list = []\n        Y_list = []\n\n        if 'History' in 
Data:\n            history_data = Data['History']\n            X_list.extend(list(history_data['X']))\n            Y_list.extend(list(history_data['Y']))\n\n\n        target_data = Data['Target']\n        X_list.append(target_data['X'])\n        Y_list.append(target_data['Y'])\n\n        if self.normalizer is not None:\n            Y_list = self.normalizer(Y_list)\n\n        self.output_dim = len(Y_list)\n        self.task_id = self.output_dim - 1\n\n        if self.obj_model == None:\n            self.create_model(X_list, Y_list)\n        else:\n            if self.output_dim > 1:\n                self.set_XY(X_list, Y_list)\n            else:\n                self.obj_model.set_XY(X_list[0], Y_list[0])\n\n        try:\n            self.obj_model.optimize_restarts(num_restarts=1, verbose=self.verbose, robust=True)\n        except np.linalg.linalg.LinAlgError as e:\n            # break\n            print('Error: np.linalg.linalg.LinAlgError')\n\n    def create_model(self, X_list, Y_list, mf=None, prior:list=[]):\n        if self.output_dim > 1:\n            X, Y, output_index = util.multioutput.build_XY(X_list, Y_list)\n\n            #Set inference Method\n            inference_method = ExactGaussianInference()\n            ## Set likelihood\n            likelihoods_list = [GPy.likelihoods.Gaussian(name=\"Gaussian_noise_obj_%s\" % j) for y, j in\n                                zip(Y, range(self.output_dim))]\n            likelihood = MixedNoise(likelihoods_list=likelihoods_list)\n\n            kernel = construct_multi_objective_kernel(self.input_dim, output_dim=self.output_dim, base_kernel='RBF', rank=self.output_dim)\n            self.obj_model = MPGP(X, Y, kernel, likelihood, Y_metadata={'output_index': output_index}, inference_method=inference_method, name=f'OBJ MPGP')\n\n        else:\n            if 'kernel' in self.config:\n                kern = GPy.kern.RBF(self.input_dim, ARD=False)\n            else:\n                kern = GPy.kern.RBF(self.input_dim, ARD=False)\n            X = X_list[0]\n            Y = Y_list[0]\n\n            self.obj_model = GPy.models.GPRegression(X, Y, kernel=kern)\n            self.obj_model['Gaussian_noise.*variance'].constrain_bounded(1e-9, 1e-3)\n\n\n    def set_XY(self, X=None, Y=None):\n        if isinstance(X, list):\n            X, _, self.obj_model.output_index = util.multioutput.build_XY(X, None)\n        if isinstance(Y, list):\n            _, Y, self.obj_model.output_index = util.multioutput.build_XY(Y, Y)\n\n        self.obj_model.update_model(False)\n        if Y is not None:\n            self.obj_model.Y = ObsAr(Y)\n            self.obj_model.Y_normalized = self.obj_model.Y\n        if X is not None:\n            self.obj_model.X = ObsAr(X)\n\n        self.obj_model.Y_metadata = {'output_index': self.obj_model.output_index, 'trials': np.ones(self.obj_model.output_index.shape)}\n        if isinstance(self.obj_model.inference_method, expectation_propagation.EP):\n            self.obj_model.inference_method.reset()\n        self.obj_model.update_model(True)\n\n    def model_reset(self):\n        self.obj_model = None\n\n    def predict(self, X):\n        \"\"\"\n        Predictions with the model. Returns posterior means and standard deviations at X. Note that this is different in GPy where the variances are given.\n\n        Parameters:\n            X (np.ndarray) - points to run the prediction for.\n            with_noise (bool) - whether to add noise to the prediction. 
Default is True.\n        \"\"\"\n        if X.ndim == 1:\n            X = X[None,:]\n\n        if self.output_dim > 1:\n            noise_dict  = {'output_index': np.array([self.task_id] * X.shape[0])[:,np.newaxis].astype(int)}\n            X = np.hstack((X, noise_dict['output_index']))\n\n            m, v = self.obj_model.predict(X, Y_metadata=noise_dict, full_cov=False, include_likelihood=True)\n            v = np.clip(v, 1e-10, np.inf)\n\n        else:\n            m, v = self.obj_model.predict(X)\n\n        # We can take the square root because v is just a diagonal matrix of variances\n        return m, v\n\n    def get_fmin(self):\n        \"Get the minimum of the current model.\"\n        m, v = self.predict(self.obj_model.X)\n\n        return m.min()"
  },
  {
    "path": "transopt/optimizer/SingleObjOptimizer/PROptimizer.py",
    "content": "import GPy\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom pymoo.algorithms.soo.nonconvex.ga import GA\nfrom pymoo.algorithms.soo.nonconvex.de import DE\nfrom pymoo.algorithms.soo.nonconvex.cmaes import CMAES\nfrom pymoo.algorithms.soo.nonconvex.pso import PSO\nfrom typing import Dict, Union, List\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom transopt.optimizer.optimizer_base import BOBase\nfrom transopt.utils.serialization import vectors_to_ndarray, output_to_ndarray\nfrom transopt.utils.serialization import ndarray_to_vectors\nfrom agent.registry import optimizer_register\nfrom transopt.utils.Normalization import get_normalizer\n\n\n@optimizer_register('PREA')\nclass PREA(BayesianOptimizerBase):\n    def __init__(self, config: Dict, **kwargs):\n        super(PREA, self).__init__(config=config)\n\n        self.init_method = 'latin'\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n        if 'verbose' in config:\n            self.verbose = config['verbose']\n        else:\n            self.verbose = True\n\n        if 'init_number' in config:\n            self.ini_num = config['init_number']\n        else:\n            self.ini_num = None\n\n        if 'ea' in config:\n            self.ea_name = config['ea']\n        else:\n            self.ea_name = 'GA'\n\n        if 'degree' in config:\n            self.degree = config['degree']\n        else:\n            self.degree = 10\n\n        # model_manage: 'best' or 'pre-select' or 'generation'\n        if 'model_manage' in config:\n            self.model_manage = config['model_manage']\n        else:\n            self.model_manage = 'best'\n\n        # 'best':k best individual, 'pre-select' and 'generation': every k generation\n        if 'k' in config:\n            self.k = config['k']\n        else:\n            self.k = 1\n\n        self.pop = None\n        self.pop_num = self.ini_num\n\n    def initial_sample(self):\n        return self.sample(self.ini_num)\n\n    def suggest(self, n_suggestions: Union[None, int] = None) -> List[Dict]:\n        if self._X.size == 0:\n            suggests = self.initial_sample()\n            return suggests\n        else:\n            if 'normalize' in self.config:\n                self.normalizer = get_normalizer(self.config['normalize'])\n\n            Data = {'Target': {'X': self._X, 'Y': self._Y}}\n            self.update_model(Data)\n            self.problem = EAProblem(self.search_space.config_space, self.predict)\n            # 得到新的种群\n            self.pop = self.ea.ask()\n            # 模型管理策略，选择需要准确评估的个体\n            elites = self.model_manage_strategy().reshape(-1, self.input_dim)\n            # 准确评估优秀个体\n            suggested_sample = self.search_space.zip_inputs(elites)\n            suggested_sample = ndarray_to_vectors(self._get_var_name('search'), suggested_sample)\n            design_suggested_sample = self.inverse_transform(suggested_sample)\n\n            return design_suggested_sample\n\n    def observe(self, input_vectors: Union[List[Dict], Dict], output_value: Union[List[Dict], Dict]) -> None:\n        self._data_handler.add_observation(input_vectors, output_value)\n\n        # Convert dict to list of dict\n        if isinstance(input_vectors, Dict):\n            input_vectors = [input_vectors]\n        if isinstance(output_value, Dict):\n            output_value = [output_value]\n\n        # Check if the lists are empty and return if they are\n        if 
len(input_vectors) == 0 and len(output_value) == 0:\n            return\n\n        self._validate_observation('design', input_vectors=input_vectors, output_value=output_value)\n        X = self.transform(input_vectors)\n\n        self._X = np.vstack(\n            (self._X, vectors_to_ndarray(self._get_var_name('search'), X))) if self._X.size else vectors_to_ndarray(\n            self._get_var_name('search'), X)\n        self._Y = np.vstack((self._Y, output_to_ndarray(output_value))) if self._Y.size else output_to_ndarray(\n            output_value)\n\n        if self.pop is not None:\n            self.pop[self.elites_idx].F = output_value\n            # 将 pop 返回给 EA\n            self.ea.tell(infills=self.pop)\n\n    def update_model(self, Data):\n        assert 'Target' in Data\n        target_data = Data['Target']\n        X = target_data['X']\n        Y = target_data['Y']\n\n        if self.normalizer is not None:\n            Y = self.normalizer(Y)\n\n        if self.obj_model is None:\n            self.create_model(X, Y)\n            self.problem = EAProblem(self.search_space.config_space, self.predict)\n            self.create_ea()\n        else:\n            X_poly = self.poly_features.fit_transform(X)\n            self.obj_model.fit(X_poly, Y)\n\n    def create_model(self, X, Y):\n        self.poly_features = PolynomialFeatures(self.degree)\n        X_poly = self.poly_features.fit_transform(X)\n        self.obj_model = LinearRegression()\n        self.obj_model.fit(X_poly, Y)\n\n    def create_ea(self):\n        if self.ea_name == 'GA':\n            self.ea = GA(self.pop_num)\n        elif self.ea_name == 'DE':\n            self.ea = DE(self.pop_num)\n        elif self.ea_name == 'PSO':\n            self.ea = PSO(self.pop_num)\n        elif self.ea_name == 'CMAES':\n            self.ea = CMAES(self.pop_num)\n        self.ea.setup(self.problem, verbose=False)\n\n    def predict(self, X):\n        if X.ndim == 1:\n            X = X[None, :]\n\n        X_poly = self.poly_features.transform(X)\n        Y = self.obj_model.predict(X_poly)\n        return Y, None\n\n    def sample(self, num_samples: int) -> List[Dict]:\n        if self.input_dim is None:\n            raise ValueError(\"Input dimension is not set. 
Call set_search_space() to set the input dimension.\")\n\n        temp = None\n        if self.init_method == 'latin':\n            temp = np.random.rand(num_samples, self.input_dim)\n            for i in range(self.input_dim):\n                temp[:, i] = (temp[:, i] + np.random.permutation(np.arange(num_samples))) / num_samples\n\n        samples = []\n        for i in range(num_samples):\n            sample = {}\n            for j, var_info in enumerate(self.search_space.config_space):\n                var_name = var_info['name']\n                var_domain = var_info['domain']\n                if self.init_method == 'random':\n                    value = np.random.uniform(var_domain[0], var_domain[1])\n                elif self.init_method == 'latin':\n                    value = temp[i][j] * (var_domain[1] - var_domain[0]) + var_domain[0]\n                sample[var_name] = value\n            samples.append(sample)\n\n        samples = self.inverse_transform(samples)\n        return samples\n\n    def model_reset(self):\n        self.obj_model = None\n\n    def get_fmin(self):\n        m, _ = self.predict(self._X)\n        return m.min()\n\n    def reset(self, task_name: str, design_space: Dict, search_sapce: Union[None, Dict] = None):\n        self.set_space(design_space, search_sapce)\n        self._X = np.empty((0,))  # Initializes an empty ndarray for input vectors\n        self._Y = np.empty((0,))\n        self._data_handler.reset_task(task_name, design_space)\n        self.sync_data(self._data_handler.get_input_vectors(), self._data_handler.get_output_value())\n        self.model_reset()\n\n    def model_manage_strategy(self):\n        self.ea.evaluator.eval(self.problem, self.pop)\n        pop_X = np.array([p.X for p in self.pop])\n        pop_F = np.array([p.F for p in self.pop])\n        if self.model_manage == 'best':\n            top_k_idx = sorted(range(len(pop_F)), key=lambda i: pop_F[i])[:self.k]\n            elites = self.pop_X[top_k_idx]\n        elif self.model_manage == 'pre-select':\n            total_pop_X = pop_X\n            total_pop_F = pop_F\n            for i in range(self.k - 1):\n                pop = self.ea.ask()\n                self.ea.evaluator.eval(self.problem, pop)\n                pop_X = np.array([p.X for p in pop])\n                pop_F = np.array([p.F for p in pop])\n                total_pop_X = np.concatenate((total_pop_X, pop_X))\n                total_pop_F = np.concatenate((total_pop_F, pop_F))\n            top_k_idx = sorted(range(len(total_pop_F)), key=lambda i: total_pop_F[i])[:self.ini_num]\n            elites = total_pop_X[top_k_idx]\n        elif self.model_manage == 'generation':\n            for i in range(self.k - 1):\n                pop = self.ea.ask()\n            self.ea.evaluator.eval(self.problem, pop)\n            pop_X = np.array([p.X for p in pop])\n            top_k_idx = range(len(pop_X))\n            elites = pop_X\n        else:\n            raise ValueError(f\"Invalid model manage strategy: {self.model_manage}\")\n        self.elites_idx = top_k_idx\n        return elites\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    
def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = self.predict(x)\n"
  },
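  {
    "path": "demo/sketches/ea_model_manage_sketch.py",
    "content": "\"\"\"Minimal, self-contained sketch of the ask/eval pattern that PREA's\nmodel_manage_strategy builds on: the EA proposes a population, a cheap\nsurrogate scores it, and only the top-k elites go on to exact evaluation.\nThis file path, `toy_predict`, and `SketchProblem` are illustrative\nassumptions, not part of the package; `toy_predict` merely mirrors\nPREA.predict's (mean, None) return shape.\"\"\"\nimport numpy as np\nfrom pymoo.algorithms.soo.nonconvex.ga import GA\nfrom pymoo.core.problem import Problem\n\n\ndef toy_predict(X):\n    # Stand-in surrogate: a sphere model with PREA.predict's signature.\n    return np.sum(X ** 2, axis=1, keepdims=True), None\n\n\nclass SketchProblem(Problem):\n    def __init__(self):\n        super().__init__(n_var=3, n_obj=1, xl=-np.ones(3), xu=np.ones(3))\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = toy_predict(x)\n\n\nif __name__ == '__main__':\n    problem = SketchProblem()\n    ea = GA(pop_size=20)\n    ea.setup(problem, verbose=False)\n    pop = ea.ask()                   # propose a new population\n    ea.evaluator.eval(problem, pop)  # score it on the cheap surrogate\n    top_k_idx = sorted(range(len(pop)), key=lambda i: pop[i].F[0])[:2]\n    elites = np.array([pop[i].X for i in top_k_idx])\n    print(elites)                    # candidates for exact evaluation\n"
  },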
  {
    "path": "transopt/optimizer/SingleObjOptimizer/RBFNOptimizer.py",
    "content": "import GPy\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom pymoo.algorithms.soo.nonconvex.ga import GA\nfrom pymoo.algorithms.soo.nonconvex.de import DE\nfrom pymoo.algorithms.soo.nonconvex.cmaes import CMAES\nfrom pymoo.algorithms.soo.nonconvex.pso import PSO\nfrom typing import Dict, Union, List\n\nfrom transopt.optimizer.optimizer_base import BOBase\nfrom transopt.utils.serialization import vectors_to_ndarray, output_to_ndarray\nfrom transopt.utils.serialization import ndarray_to_vectors\nfrom agent.registry import optimizer_register\nfrom transopt.utils.Normalization import get_normalizer\nfrom transopt.optimizer.model.RBFN import RBFN, RegressionDataset\n\n\n@optimizer_register('RbfnEA')\nclass RbfnEA(BayesianOptimizerBase):\n    def __init__(self, config: Dict, **kwargs):\n        super(RbfnEA, self).__init__(config=config)\n\n        self.init_method = 'latin'\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n        if 'verbose' in config:\n            self.verbose = config['verbose']\n        else:\n            self.verbose = True\n\n        if 'init_number' in config:\n            self.ini_num = config['init_number']\n        else:\n            self.ini_num = None\n\n        if 'ea' in config:\n            self.ea_name = config['ea']\n        else:\n            self.ea_name = 'GA'\n\n        if 'max_epoch' in config:\n            self.max_epoch = config['max_epoch']\n        else:\n            self.max_epoch = 10\n\n        if 'batch_size' in config:\n            self.batch_size = config['batch_size']\n        else:\n            self.batch_size = 1\n\n        if 'lr' in config:\n            self.lr = config['lr']\n        else:\n            self.lr = 0.01\n\n        if 'num_centers' in config:\n            self.num_centers = config['num_centers']\n        else:\n            self.num_centers = 10\n        \n        # model_manage: 'best' or 'pre-select' or 'generation'\n        if 'model_manage' in config:\n            self.model_manage = config['model_manage']\n        else:\n            self.model_manage = 'best'\n\n        # 'best':k best individual, 'pre-select' and 'generation': every k generation\n        if 'k' in config:\n            self.k = config['k']\n        else:\n            self.k = 1\n\n        self.pop = None\n        self.pop_num = self.ini_num\n\n    def initial_sample(self):\n        return self.sample(self.ini_num)\n\n    def suggest(self, n_suggestions: Union[None, int] = None) -> List[Dict]:\n        if self._X.size == 0:\n            suggests = self.initial_sample()\n            return suggests\n        else:\n            if 'normalize' in self.config:\n                self.normalizer = get_normalizer(self.config['normalize'])\n\n            Data = {'Target': {'X': self._X, 'Y': self._Y}}\n            self.update_model(Data)\n            self.problem = EAProblem(self.search_space.config_space, self.predict)\n            # 得到新的种群\n            self.pop = self.ea.ask()\n            # 模型管理策略，选择需要准确评估的个体\n            elites = self.model_manage_strategy().reshape(-1, self.input_dim)\n            # 准确评估优秀个体\n            suggested_sample = self.search_space.zip_inputs(elites)\n            suggested_sample = ndarray_to_vectors(self._get_var_name('search'), suggested_sample)\n            design_suggested_sample = self.inverse_transform(suggested_sample)\n\n            return design_suggested_sample\n\n    def observe(self, input_vectors: Union[List[Dict], Dict], output_value: Union[List[Dict], Dict]) 
-> None:\n        self._data_handler.add_observation(input_vectors, output_value)\n\n        # Convert dict to list of dict\n        if isinstance(input_vectors, Dict):\n            input_vectors = [input_vectors]\n        if isinstance(output_value, Dict):\n            output_value = [output_value]\n\n        # Check if the lists are empty and return if they are\n        if len(input_vectors) == 0 and len(output_value) == 0:\n            return\n\n        self._validate_observation('design', input_vectors=input_vectors, output_value=output_value)\n        X = self.transform(input_vectors)\n\n        self._X = np.vstack((self._X, vectors_to_ndarray(self._get_var_name('search'), X))) if self._X.size else vectors_to_ndarray(self._get_var_name('search'), X)\n        self._Y = np.vstack((self._Y, output_to_ndarray(output_value))) if self._Y.size else output_to_ndarray(output_value)\n\n        if self.pop is not None:\n            self.pop[self.elites_idx].F = output_value\n            # 将 pop 返回给 EA\n            self.ea.tell(infills=self.pop)\n\n    def update_model(self, Data):\n        assert 'Target' in Data\n        target_data = Data['Target']\n        X = target_data['X']\n        Y = target_data['Y']\n\n        if self.normalizer is not None:\n            Y = self.normalizer(Y)\n\n        if self.obj_model is None:\n            self.create_model(X, Y)\n            self.problem = EAProblem(self.search_space.config_space, self.predict)\n            self.create_ea()\n        else:\n            dataset = RegressionDataset(torch.from_numpy(X), torch.from_numpy(Y))\n            self.obj_model.update_dataset(dataset)\n\n        try:\n            self.obj_model.train()\n        except np.linalg.LinAlgError as e:\n            # break\n            print('Error: np.linalg.LinAlgError')\n\n    def create_model(self, X, Y):\n        dataset = RegressionDataset(torch.from_numpy(X), torch.from_numpy(Y))\n        self.obj_model = RBFN(dataset=dataset,\n                              max_epoch=self.max_epoch,\n                              batch_size=self.batch_size,\n                              lr=self.lr,\n                              num_centers=self.num_centers)\n\n    def create_ea(self):\n        if self.ea_name == 'GA':\n            self.ea = GA(self.pop_num)\n        elif self.ea_name == 'DE':\n            self.ea = DE(self.pop_num)\n        elif self.ea_name == 'PSO':\n            self.ea = PSO(self.pop_num)\n        elif self.ea_name == 'CMAES':\n            self.ea = CMAES(self.pop_num)\n        self.ea.setup(self.problem, verbose=False)\n\n    def predict(self, X):\n        if X.ndim == 1:\n            X = X[None, :]\n\n        Y = self.obj_model.predict(X)\n        return Y, None\n\n    def sample(self, num_samples: int) -> List[Dict]:\n        if self.input_dim is None:\n            raise ValueError(\"Input dimension is not set. 
Call set_search_space() to set the input dimension.\")\n\n        temp = None\n        if self.init_method == 'latin':\n            temp = np.random.rand(num_samples, self.input_dim)\n            for i in range(self.input_dim):\n                temp[:, i] = (temp[:, i] + np.random.permutation(np.arange(num_samples))) / num_samples\n\n        samples = []\n        for i in range(num_samples):\n            sample = {}\n            for j, var_info in enumerate(self.search_space.config_space):\n                var_name = var_info['name']\n                var_domain = var_info['domain']\n                if self.init_method == 'random':\n                    value = np.random.uniform(var_domain[0], var_domain[1])\n                elif self.init_method == 'latin':\n                    value = temp[i][j] * (var_domain[1] - var_domain[0]) + var_domain[0]\n                sample[var_name] = value\n            samples.append(sample)\n\n        samples = self.inverse_transform(samples)\n        return samples\n\n    def model_reset(self):\n        self.obj_model = None\n\n    def get_fmin(self):\n        X = self.obj_model.dataset.inputs.numpy()\n        m, _ = self.predict(X)\n        return m.min()\n\n    def reset(self, task_name: str, design_space: Dict, search_sapce: Union[None, Dict] = None):\n        self.set_space(design_space, search_sapce)\n        self._X = np.empty((0,))  # Initializes an empty ndarray for input vectors\n        self._Y = np.empty((0,))\n        self._data_handler.reset_task(task_name, design_space)\n        self.sync_data(self._data_handler.get_input_vectors(), self._data_handler.get_output_value())\n        self.model_reset()\n\n    def model_manage_strategy(self):\n        self.ea.evaluator.eval(self.problem, self.pop)\n        pop_X = np.array([p.X for p in self.pop])\n        pop_F = np.array([p.F for p in self.pop])\n        if self.model_manage == 'best':\n            top_k_idx = sorted(range(len(pop_F)), key=lambda i: pop_F[i])[:self.k]\n            elites = self.pop_X[top_k_idx]\n        elif self.model_manage == 'pre-select':\n            total_pop_X = pop_X\n            total_pop_F = pop_F\n            for i in range(self.k - 1):\n                pop = self.ea.ask()\n                self.ea.evaluator.eval(self.problem, pop)\n                pop_X = np.array([p.X for p in pop])\n                pop_F = np.array([p.F for p in pop])\n                total_pop_X = np.concatenate((total_pop_X, pop_X))\n                total_pop_F = np.concatenate((total_pop_F, pop_F))\n            top_k_idx = sorted(range(len(total_pop_F)), key=lambda i: total_pop_F[i])[:self.ini_num]\n            elites = total_pop_X[top_k_idx]\n        elif self.model_manage == 'generation':\n            for i in range(self.k - 1):\n                pop = self.ea.ask()\n            self.ea.evaluator.eval(self.problem, pop)\n            pop_X = np.array([p.X for p in pop])\n            top_k_idx = range(len(pop_X))\n            elites = pop_X\n        else:\n            raise ValueError(f\"Invalid model manage strategy: {self.model_manage}\")\n        self.elites_idx = top_k_idx\n        return elites\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        
super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = self.predict(x)"
  },
  {
    "path": "transopt/optimizer/SingleObjOptimizer/RGPEOptimizer.py",
    "content": "import numpy as np\nimport GPy\n\nfrom transopt.utils.serialization import ndarray_to_vectors\nfrom agent.registry import optimizer_register\nfrom transopt.utils.Normalization import get_normalizer\nfrom typing import Dict, Union, List, Tuple\nfrom transopt.optimizer.optimizer_base import BOBase\nfrom optimizer.model.rgpe import RGPE\n\n@optimizer_register(\"RGPE\")\nclass RGPEOptimizer(BOBase):\n    def __init__(self, config: Dict, **kwargs):\n        super(RGPEOptimizer, self).__init__(config=config)\n        self.init_method = 'Random'\n\n        if 'verbose' in config:\n            self.verbose = config['verbose']\n        else:\n            self.verbose = True\n\n        if 'init_number' in config:\n            self.ini_num = config['init_number']\n        else:\n            self.ini_num = None\n\n        if 'acf' in config:\n            self.acf = config['acf']\n        else:\n            self.acf = 'EI'\n\n\n    def initial_sample(self):\n        return self.random_sample(self.ini_num)\n\n    def random_sample(self, num_samples: int) -> List[Dict]:\n        \"\"\"\n        Initialize random samples.\n\n        :param num_samples: Number of random samples to generate\n        :return: List of dictionaries, each representing a random sample\n        \"\"\"\n        if self.input_dim is None:\n            raise ValueError(\"Input dimension is not set. Call set_search_space() to set the input dimension.\")\n\n        random_samples = []\n        for _ in range(num_samples):\n            sample = {}\n            for var_info in self.search_space.config_space:\n                var_name = var_info['name']\n                var_domain = var_info['domain']\n                # Generate a random floating-point number within the specified range\n                random_value = np.random.uniform(var_domain[0], var_domain[1])\n                sample[var_name] = random_value\n            random_samples.append(sample)\n\n        random_samples = self.inverse_transform(random_samples)\n        return random_samples\n\n    def model_reset(self):\n        if self.obj_model is None:\n            self.obj_model = RGPE(n_features=self.input_dim)\n        if self.obj_model.target_model is not None:\n            self.meta_update()\n        if self._X.size != 0:\n            self.obj_model.fit({'X':self._X, 'Y':self._Y})\n\n\n    def meta_update(self):\n        self.obj_model.meta_update()\n    def suggest(self, n_suggestions:Union[None, int] = None) ->List[Dict]:\n        if self._X.size == 0:\n            suggests = self.initial_sample()\n            return suggests\n        elif self._X.shape[0] < self.ini_num:\n            pass\n        else:\n            if 'normalize' in self.config:\n                self.normalizer = get_normalizer(self.config['normalize'])\n\n            Data = {}\n            Data['Target'] = {'X':self._X, 'Y':self._Y}\n            self.update_model(Data)\n            suggested_sample, acq_value = self.evaluator.compute_batch(None, context_manager=None)\n            suggested_sample = self.search_space.zip_inputs(suggested_sample)\n            suggested_sample = ndarray_to_vectors(self._get_var_name('search'), suggested_sample)\n            design_suggested_sample = self.inverse_transform(suggested_sample)\n\n            return design_suggested_sample\n\n    def create_model(self):\n        self.obj_model = RGPE(self.input_dim)\n\n    def create_model(self, model_name, Source_data, Target_data):\n        self.model_name = model_name\n        source_num = 
len(Source_data['Y'])\n        self.output_dim = source_num + 1\n\n        ##Meta Date\n        meta_data = {}\n        for i in range(source_num):\n            meta_data[i] = TaskData(X=Source_data['X'][i], Y=Source_data['Y'][i])\n\n        ###Construct objective model\n        if self.model_name == 'RGPE':\n            self.obj_model = get_model('RGPE', self.model_space)\n            self.obj_model.meta_fit(meta_data)\n            self.obj_model.fit(TaskData(Target_data['X'], Target_data['Y']), optimize=True)\n        elif self.model_name == 'SGPT_POE':\n            self.obj_model = SGPT_POE(n_features=self.Xdim, beta=1)\n            self.obj_model.meta_fit(meta_data)\n            self.obj_model.fit(TaskData(Target_data['X'], Target_data['Y']), optimize=True)\n        elif self.model_name == 'SGPT_M':\n            self.obj_model = SGPT_M(n_features=self.Xdim)\n            self.obj_model.meta_fit(meta_data)\n            self.obj_model.fit(TaskData(Target_data['X'], Target_data['Y']), optimize=True)\n        else:\n            if self.kernel == None or self.kernel == 'RBF':\n                kern = GPy.kern.RBF(self.Xdim, ARD=True)\n            else:\n                kern = GPy.kern.RBF(self.Xdim, ARD=True)\n            X = Target_data['X']\n            Y = Target_data['Y']\n\n            self.obj_model = GPy.models.GPRegression(X, Y, kernel=kern)\n            self.obj_model['Gaussian_noise.*variance'].constrain_bounded(1e-9, 1e-3)\n            try:\n                self.obj_model.optimize_restarts(messages=True, num_restarts=1, verbose=self.verbose)\n            except np.linalg.linalg.LinAlgError as e:\n                # break\n                print('Error: np.linalg.linalg.LinAlgError')\n\n    def updateModel(self, Target_data):\n        ## Train target model\n        if self.model_name == 'RGPE' or \\\n            self.model_name == 'SGPT_POE' or self.model_name == 'SGPT_M':\n            self.obj_model.fit(TaskData(Target_data['X'], Target_data['Y']), optimize=True)\n\n        else:\n            X = Target_data['X']\n            Y = Target_data['Y']\n            self.obj_model.set_XY(X, Y)\n            try:\n                self.obj_model.optimize_restarts(messages=True, num_restarts=1,\n                                                 verbose=self.verbose)\n            except np.linalg.linalg.LinAlgError as e:\n                # break\n                print('Error: np.linalg.linalg.LinAlgError')\n\n            return None\n\n    def reset_target(self):\n        self.obj_model.reset_target()\n\n    def meta_add(self, meta_data):\n        self.obj_model.meta_add(meta_data)\n\n    def resetModel(self, Source_data, Target_data):\n        ## Train target model\n        pass\n\n    def get_train_time(self):\n        return self.fit_time\n\n    def get_fit_time(self):\n        return self.acf_time\n\n\n    def predict(\n        self, X, return_full: bool = False, with_noise: bool = False\n    ) -> Tuple[np.ndarray, np.ndarray]:\n\n        # returned mean: sum of means of the predictions of all source and target GPs\n        mu, var = self.obj_model.predict(X, return_full=return_full)\n\n        return mu, var\n\n\n    def obj_posterior_samples(self, X, sample_size):\n        if X.ndim == 1:\n            X = X[None,:]\n        task_id = self.output_dim - 1\n\n        if self.model_name == 'SGPT_POE' or self.model_name == 'SGPT_M' or\\\n                self.model_name == 'RGPE':\n            samples_obj = self.posterior_samples(X, model_id=0,size=sample_size)\n\n        else:\n            raise 
NameError\n\n        return samples_obj\n\n    def update_model(self, Data: Dict):\n        ## Train target model\n        if self.obj_model is None:\n            self.create_model(Data['Target'])\n        elif self.obj_model is not None:\n            self.obj_model.set_XY(Data['Target'])\n        else:\n            self.obj_model.set_XY(Data['Target'])\n\n        ## Train target model\n        self.obj_model.fit(Data['Target'], optimize=True)\n\n    def get_fmin(self):\n        \"Get the minimum of the current model.\"\n        m, v = self.predict(self.obj_model.X)\n\n        return m.min()\n\n    def set_XY(self, X=None, Y=None):\n        if isinstance(X, list):\n            X, _, self.obj_model.output_index = util.multioutput.build_XY(X, None)\n        if isinstance(Y, list):\n            _, Y, self.obj_model.output_index = util.multioutput.build_XY(Y, Y)\n\n        self.obj_model.update_model(False)\n        if Y is not None:\n            self.obj_model.Y = ObsAr(Y)\n            self.obj_model.Y_normalized = self.obj_model.Y\n        if X is not None:\n            self.obj_model.X = ObsAr(X)\n\n        self.obj_model.Y_metadata = {'output_index': self.obj_model.output_index, 'trials': np.ones(self.obj_model.output_index.shape)}\n        if isinstance(self.obj_model.inference_method, expectation_propagation.EP):\n            self.obj_model.inference_method.reset()\n        self.obj_model.update_model(True)\n\n    def samples(self, gp):\n        \"\"\"\n        Returns a set of samples of observations based on a given value of the latent variable.\n\n        :param gp: latent variable\n        \"\"\"\n        orig_shape = gp.shape\n        gp = gp.flatten()\n        #orig_shape = gp.shape\n        gp = gp.flatten()\n        Ysim = np.array([np.random.normal(gpj, scale=np.sqrt(1e-2), size=1) for gpj in gp])\n        return Ysim.reshape(orig_shape)\n\n    def posterior_samples_f(self,X, model_id, size=10):\n        \"\"\"\n        Samples the posterior GP at the points X.\n\n        :param X: The points at which to take the samples.\n        :type X: np.ndarray (Nnew x self.input_dim)\n        :param size: the number of a posteriori samples.\n        :type size: int.\n        :returns: set of simulations\n        :rtype: np.ndarray (Nnew x D x samples)\n        \"\"\"\n        m, v = self.obj_model.predict(X, return_full=True)\n\n        def sim_one_dim(m, v):\n            return np.random.multivariate_normal(m, v, size).T\n\n        return sim_one_dim(m.flatten(), v)[:, np.newaxis, :]\n\n\n    def posterior_samples(self, X, model_id, size=10):\n        \"\"\"\n        Samples the posterior GP at the points X.\n\n        :param X: the points at which to take the samples.\n        :type X: np.ndarray (Nnew x self.input_dim.)\n        :param size: the number of a posteriori samples.\n        :type size: int.\n        :param noise_model: for mixed noise likelihood, the noise model to use in the samples.\n        :type noise_model: integer.\n        :returns: Ysim: set of simulations,\n        :rtype: np.ndarray (D x N x samples) (if D==1 we flatten out the first dimension)\n        \"\"\"\n        fsim = self.posterior_samples_f(X, model_id=model_id, size=size)\n\n        if fsim.ndim == 3:\n            for d in range(fsim.shape[1]):\n                fsim[:, d] = self.samples(fsim[:, d])\n        else:\n            fsim = self.samples(fsim)\n        return fsim"
  },
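  {
    "path": "demo/sketches/posterior_sampling_sketch.py",
    "content": "\"\"\"Sketch of the joint posterior-sampling step used by posterior_samples_f\nin RGPEOptimizer, isolated into a hypothetical helper (this file and the\nfunction name are illustrative assumptions, not part of the package): given\na posterior mean vector and a full covariance matrix, GP function samples\nare draws from the corresponding multivariate normal.\"\"\"\nimport numpy as np\n\n\ndef draw_posterior_function_samples(m, v_full, size=10):\n    # m: (N,) posterior mean; v_full: (N, N) posterior covariance.\n    # Returns an (N, size) array with one sampled function per column.\n    return np.random.multivariate_normal(m.flatten(), v_full, size).T\n\n\nif __name__ == '__main__':\n    m = np.zeros(5)\n    v = 0.1 * np.eye(5)\n    print(draw_posterior_function_samples(m, v, size=3).shape)  # (5, 3)\n"
  },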
  {
    "path": "transopt/optimizer/SingleObjOptimizer/TPEOptimizer.py",
    "content": "import numpy as np\nfrom typing import Dict, List, Union\nfrom transopt.optimizer.optimizer_base import BOBase\nfrom transopt.utils.serialization import ndarray_to_vectors\nfrom agent.registry import optimizer_register\n\nfrom transopt.utils.Normalization import get_normalizer\n\n\n@optimizer_register('TPE')\nclass TPEOptimizer(BOBase):\n    def __init__(self, config:Dict, **kwargs):\n        super(TPEOptimizer, self).__init__(config=config)\n\n        self.init_method = 'Random'\n        self.model = None\n\n        if 'verbose' in config:\n            self.verbose = config['verbose']\n        else:\n            self.verbose = True\n\n        if 'init_number' in config:\n            self.ini_num = config['init_number']\n        else:\n            self.ini_num = None\n\n        if 'acf' in config:\n            self.acf = config['acf']\n        else:\n            self.acf = 'EI'\n\n        self.obj_model = None\n\n\n\n    def initial_sample(self):\n        return self.random_sample(self.ini_num)\n\n    def suggest(self, n_suggestions:Union[None, int] = None) ->List[Dict]:\n        if self._X.size == 0:\n            suggests = self.initial_sample()\n            return suggests\n        elif self._X.shape[0] < self.ini_num:\n            pass\n        else:\n            if 'normalize' in self.config:\n                self.normalizer = get_normalizer(self.config['normalize'])\n\n\n            Data = {'Target':{'X':self._X, 'Y':self._Y}}\n            self.update_model(Data)\n            suggested_sample, acq_value = self.evaluator.compute_batch(None, context_manager=None)\n            suggested_sample = self.search_space.zip_inputs(suggested_sample)\n            suggested_sample = ndarray_to_vectors(self._get_var_name('search'), suggested_sample)\n            design_suggested_sample = self.inverse_transform(suggested_sample)\n\n            return design_suggested_sample\n\n    def update_model(self, Data):\n        assert 'Target' in Data\n        target_data = Data['Target']\n        X = target_data['X']\n        Y = target_data['Y']\n\n        if self.normalizer is not None:\n            Y = self.normalizer(Y)\n\n        if self.obj_model == None:\n            self.create_model(X, Y)\n        else:\n            self.obj_model.set_XY(X, Y)\n\n        try:\n            self.obj_model.optimize_restarts(num_restarts=1, verbose=self.verbose, robust=True)\n        except np.linalg.linalg.LinAlgError as e:\n            # break\n            print('Error: np.linalg.linalg.LinAlgError')\n\n    def create_model(self, X, Y):\n        self.obj_model = TPEOptimizer()\n\n\n\n    def predict(self, X):\n        \"\"\"\n        Predictions with the model. Returns posterior means and standard deviations at X. Note that this is different in GPy where the variances are given.\n\n        Parameters:\n            X (np.ndarray) - points to run the prediction for.\n            with_noise (bool) - whether to add noise to the prediction. 
Default is True.\n        \"\"\"\n        if X.ndim == 1:\n            X = X[None,:]\n\n        m, v = self.obj_model.predict(X)\n\n        # We can take the square root because v is just a diagonal matrix of variances\n        return m, v\n\n\n    def random_sample(self, num_samples: int) -> List[Dict]:\n        \"\"\"\n        Initialize random samples.\n\n        :param num_samples: Number of random samples to generate\n        :return: List of dictionaries, each representing a random sample\n        \"\"\"\n        if self.input_dim is None:\n            raise ValueError(\"Input dimension is not set. Call set_search_space() to set the input dimension.\")\n\n        random_samples = []\n        for _ in range(num_samples):\n            sample = {}\n            for var_info in self.search_space.config_space:\n                var_name = var_info['name']\n                var_domain = var_info['domain']\n                # Generate a random floating-point number within the specified range\n                random_value = np.random.uniform(var_domain[0], var_domain[1])\n                sample[var_name] = random_value\n            random_samples.append(sample)\n\n        random_samples = self.inverse_transform(random_samples)\n        return random_samples\n\n    def model_reset(self):\n        self.obj_model = None\n\n    def get_fmin(self):\n        \"Get the minimum of the current model.\"\n        m, v = self.predict(self.obj_model.X)\n\n        return m.min()\n\n    def posterior_samples(self, X, model_id, size=10):\n        \"\"\"\n        Samples the posterior GP at the points X.\n\n        :param X: the points at which to take the samples.\n        :type X: np.ndarray (Nnew x self.input_dim.)\n        :param size: the number of a posteriori samples.\n        :type size: int.\n        :param noise_model: for mixed noise likelihood, the noise model to use in the samples.\n        :type noise_model: integer.\n        :returns: Ysim: set of simulations,\n        :rtype: np.ndarray (D x N x samples) (if D==1 we flatten out the first dimension)\n        \"\"\"\n        fsim = self.posterior_samples_f(X, model_id=model_id, size=size)\n\n        if fsim.ndim == 3:\n            for d in range(fsim.shape[1]):\n                fsim[:, d] = self.samples(fsim[:, d])\n        else:\n            fsim = self.samples(fsim)\n        return fsim"
  },
  {
    "path": "transopt/optimizer/SingleObjOptimizer/VizerOptimizer.py",
    "content": "import numpy as np\n\nfrom transopt.utils.serialization import ndarray_to_vectors\nfrom agent.registry import optimizer_register\nfrom transopt.utils.Normalization import get_normalizer\nfrom transopt.optimizer.model.MHGP import MHGP\nfrom typing import Dict, Union, List, Tuple\nfrom transopt.optimizer.optimizer_base import BOBase\n\n\n@optimizer_register('vizer')\nclass Vizer(BOBase):\n\n    def __init__(self, config: Dict, **kwargs):\n        super(Vizer, self).__init__(config=config)\n        self.init_method = 'Random'\n\n        if 'verbose' in config:\n            self.verbose = config['verbose']\n        else:\n            self.verbose = True\n\n        if 'init_number' in config:\n            self.ini_num = config['init_number']\n        else:\n            self.ini_num = None\n\n        if 'acf' in config:\n            self.acf = config['acf']\n        else:\n            self.acf = 'EI'\n\n\n    def model_reset(self):\n        if self.obj_model is None:\n            self.obj_model = MHGP(n_features=self.input_dim)\n        if self.obj_model.target_gp is not None:\n            self.meta_update()\n            self.obj_model.fit({'X':self._X, 'Y':self._Y})\n        if self.obj_model.target_gp is None and self._X.size != 0:\n            self.obj_model.fit({'X':self._X, 'Y':self._Y})\n\n    def initial_sample(self):\n        return self.random_sample(self.ini_num)\n\n    def random_sample(self, num_samples: int) -> List[Dict]:\n        \"\"\"\n        Initialize random samples.\n\n        :param num_samples: Number of random samples to generate\n        :return: List of dictionaries, each representing a random sample\n        \"\"\"\n        if self.input_dim is None:\n            raise ValueError(\"Input dimension is not set. Call set_search_space() to set the input dimension.\")\n\n        random_samples = []\n        for _ in range(num_samples):\n            sample = {}\n            for var_info in self.search_space.config_space:\n                var_name = var_info['name']\n                var_domain = var_info['domain']\n                # Generate a random floating-point number within the specified range\n                random_value = np.random.uniform(var_domain[0], var_domain[1])\n                sample[var_name] = random_value\n            random_samples.append(sample)\n\n        random_samples = self.inverse_transform(random_samples)\n        return random_samples\n\n    def suggest(self, n_suggestions:Union[None, int] = None) ->List[Dict]:\n        if self._X.size == 0:\n            suggests = self.initial_sample()\n            return suggests\n        elif self._X.shape[0] < self.ini_num:\n            pass\n        else:\n            if 'normalize' in self.config:\n                self.normalizer = get_normalizer(self.config['normalize'])\n\n            Data = {}\n            Data['Target'] = {'X':self._X, 'Y':self._Y}\n            self.update_model(Data)\n            suggested_sample, acq_value = self.evaluator.compute_batch(None, context_manager=None)\n            suggested_sample = self.search_space.zip_inputs(suggested_sample)\n            suggested_sample = ndarray_to_vectors(self._get_var_name('search'), suggested_sample)\n            design_suggested_sample = self.inverse_transform(suggested_sample)\n\n            return design_suggested_sample\n\n    def meta_update(self):\n        self.obj_model.meta_update()\n\n    def meta_add(self, Data:List[Dict]):\n        self.obj_model.meta_add(Data)\n\n    def create_model(self):\n        self.obj_model = 
MHGP(self.input_dim)\n\n\n    def update_model(self, Data):\n        ## Train target model\n        if self.obj_model is None:\n            self.create_model(Data['Target'])\n        elif self.obj_model is not None:\n            self.obj_model.set_XY(Data['Target'])\n        else:\n            self.obj_model.set_XY(Data['Target'])\n\n        ## Train target model\n        self.obj_model.fit(Data['Target'], optimize=True)\n\n\n    def MetaFitModel(self, metadata):\n\n        if self.model_name == 'SHGP' or \\\n            self.model_name == 'MHGP' or \\\n            self.model_name == 'BHGP':\n\n            self.obj_model.meta_fit(metadata)\n\n    # def meta_add(self, meta_data:Dict):\n    #     self.obj_model.meta_add(meta_data:Dict)\n\n\n    def get_train_time(self):\n        return self.fit_time\n\n    def get_fit_time(self):\n        return self.acf_time\n\n\n    def predict(\n        self, X, return_full: bool = False, with_noise: bool = False\n    ) -> Tuple[np.ndarray, np.ndarray]:\n\n        # returned mean: sum of means of the predictions of all source and target GPs\n        mu, var = self.obj_model.predict(X, return_full=return_full, with_noise=with_noise)\n\n        return mu, var\n\n\n    def obj_posterior_samples(self, X, sample_size):\n        if X.ndim == 1:\n            X = X[None,:]\n        task_id = self.output_dim - 1\n\n        if self.model_name == 'SHGP' or \\\n                self.model_name == 'HGP' or \\\n                self.model_name == 'MHGP' or \\\n                self.model_name == 'BHGP' or \\\n                self.model_name == 'RPGE':\n            samples_obj = self.posterior_samples(X, model_id=0,size=sample_size)\n        elif self.model_name == 'MOGP':\n            noise_dict = {'output_index': np.array([task_id] * X.shape[0])[:, np.newaxis].astype(int)}\n            X_zip = np.hstack((X, noise_dict['output_index']))\n\n            samples_obj = self.obj_model.posterior_samples(X_zip, size=sample_size, Y_metadata=noise_dict) # grid * 1 * sample_num\n\n        else:\n            raise NameError\n\n        return samples_obj\n\n    def get_fmin(self):\n        \"Get the minimum of the current model.\"\n        m, v = self.predict(self.obj_model.X)\n\n        return m.min()\n\n    def samples(self, gp):\n        \"\"\"\n        Returns a set of samples of observations based on a given value of the latent variable.\n\n        :param gp: latent variable\n        \"\"\"\n        orig_shape = gp.shape\n        gp = gp.flatten()\n        # orig_shape = gp.shape\n        gp = gp.flatten()\n        Ysim = np.array([np.random.normal(gpj, scale=np.sqrt(1e-2), size=1) for gpj in gp])\n        return Ysim.reshape(orig_shape)\n\n\n    def posterior_samples_f(self,X, model_id, size=10):\n        \"\"\"\n        Samples the posterior GP at the points X.\n\n        :param X: The points at which to take the samples.\n        :type X: np.ndarray (Nnew x self.input_dim)\n        :param size: the number of a posteriori samples.\n        :type size: int.\n        :returns: set of simulations\n        :rtype: np.ndarray (Nnew x D x samples)\n        \"\"\"\n        m, v = self.obj_model.predict(X, return_full=True)\n\n        def sim_one_dim(m, v):\n            return np.random.multivariate_normal(m, v, size).T\n\n        return sim_one_dim(m.flatten(), v)[:, np.newaxis, :]\n\n\n    def posterior_samples(self, X, model_id, size=10):\n        \"\"\"\n        Samples the posterior GP at the points X.\n\n        :param X: the points at which to take the samples.\n        :type 
X: np.ndarray (Nnew x self.input_dim.)\n        :param size: the number of a posteriori samples.\n        :type size: int.\n        :param noise_model: for mixed noise likelihood, the noise model to use in the samples.\n        :type noise_model: integer.\n        :returns: Ysim: set of simulations,\n        :rtype: np.ndarray (D x N x samples) (if D==1 we flatten out the first dimension)\n        \"\"\"\n        fsim = self.posterior_samples_f(X, model_id=model_id, size=size)\n\n        if fsim.ndim == 3:\n            for d in range(fsim.shape[1]):\n                fsim[:, d] = self.samples(fsim[:, d])\n        else:\n            fsim = self.samples(fsim)\n        return fsim"
  },
  {
    "path": "transopt/optimizer/SingleObjOptimizer/__init__.py",
    "content": "from transopt.optimizer.SingleObjOptimizer.KrigingOptimizer import KrigingGA\nfrom transopt.optimizer.SingleObjOptimizer.LFL import LFLOptimizer\nfrom transopt.optimizer.SingleObjOptimizer.MetaLearningOptimizer import MetaBOOptimizer\nfrom transopt.optimizer.SingleObjOptimizer.MultitaskOptimizer import MultitaskBO\nfrom transopt.optimizer.SingleObjOptimizer.RGPEOptimizer import RGPEOptimizer\nfrom transopt.optimizer.SingleObjOptimizer.TPEOptimizer import TPEOptimizer\nfrom optimizer.SingleObjOptimizer.TLBO import VanillaBO\nfrom transopt.optimizer.SingleObjOptimizer.VizerOptimizer import Vizer"
  },
  {
    "path": "transopt/optimizer/__init__.py",
    "content": "# from transopt.optimizer.model.get_model import get_model\n# from transopt.optimizer.sampler.get_sampler import get_sampler\n# from transopt.optimizer.refiner.get_refiner import get_refiner\n# from transopt.optimizer.pretrain.get_pretrain import get_pretrain\n# from transopt.optimizer.acquisition_function import get_acf\n\n\n \n\n\n"
  },
  {
    "path": "transopt/optimizer/acquisition_function/ConformalLCB.py",
    "content": "# Copyright (c) 2016, the GPyOpt Authors\n# Licensed under the BSD 3-clause license (see LICENSE.txt)\n\nfrom GPyOpt.acquisitions.base import AcquisitionBase\nfrom agent.registry import acf_registry\n\n@acf_registry.register('ConformalLCB')\nclass ConformalLCB(AcquisitionBase):\n    \"\"\"\n    GP-Lower Confidence Bound acquisition function with constant exploration weight.\n    See:\n\n    Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design\n    Srinivas et al., Proc. International Conference on Machine Learning (ICML), 2010\n\n    :param Model: GPyOpt class of model\n    :param space: GPyOpt class of domain\n    :param optimizer: optimizer of the acquisition. Should be a GPyOpt optimizer\n    :param cost_withGradients: function\n    :param jitter: positive value to make the acquisition more explorative\n\n    .. Note:: does not allow to be used with cost\n\n    \"\"\"\n\n    analytical_gradient_prediction = False\n\n    def __init__(self, model, space, optimizer, config):\n        self.optimizer = optimizer\n        super(ConformalLCB, self).__init__(model, space, optimizer)\n        if 'exploration_weight' in config:\n            self.exploration_weight = config['exploration_weight']\n        else:\n            self.exploration_weight = 1\n\n    def _compute_acq(self, x):\n        \"\"\"\n        Computes the GP-Lower Confidence Bound\n        \"\"\"\n        if self.model.qhats is not None and self.model.model_name == 'MOGP':\n            m, s = self.model.conformal_prediction(x)\n        else:\n            m, s = self.model.predict(x)\n\n        f_acqu = -m + self.exploration_weight * s\n        return f_acqu\n\n    def _compute_acq_withGradients(self, x):\n        \"\"\"\n        Computes the GP-Lower Confidence Bound and its derivative\n        \"\"\"\n        m, s, dmdx, dsdx = self.model.predict_withGradients(x)\n        f_acqu = -m + self.exploration_weight * s\n        df_acqu = -dmdx + self.exploration_weight * dsdx\n        return f_acqu, df_acqu\n\n"
  },
  {
    "path": "transopt/optimizer/acquisition_function/__init__.py",
    "content": "from transopt.optimizer.acquisition_function.sequential import Sequential\n\nfrom transopt.optimizer.acquisition_function.ei import AcquisitionEI\nfrom transopt.optimizer.acquisition_function.lcb import AcquisitionLCB\nfrom transopt.optimizer.acquisition_function.pi import AcquisitionPI\nfrom transopt.optimizer.acquisition_function.taf import AcquisitionTAF\n\n# from transopt.optimizer.acquisition_function.SMSEGO import SMSEGO\n# from transopt.optimizer.acquisition_function.MOEADEGO import MOEADEGO\n# from transopt.optimizer.acquisition_function.CauMOACF import CauMOACF\n\nfrom transopt.optimizer.acquisition_function.model_manage.GABest import GABest\nfrom transopt.optimizer.acquisition_function.model_manage.GAPreSelect import GAPreSelect\nfrom transopt.optimizer.acquisition_function.model_manage.GAGeneration import GAGeneration\nfrom transopt.optimizer.acquisition_function.model_manage.DEBest import DEBest\nfrom transopt.optimizer.acquisition_function.model_manage.DEPreSelect import DEPreSelect\nfrom transopt.optimizer.acquisition_function.model_manage.DEGeneration import DEGeneration\nfrom transopt.optimizer.acquisition_function.model_manage.PSOBest import PSOBest\nfrom transopt.optimizer.acquisition_function.model_manage.PSOPreSelect import PSOPreSelect\nfrom transopt.optimizer.acquisition_function.model_manage.PSOGeneration import PSOGeneration\nfrom transopt.optimizer.acquisition_function.model_manage.CMAESBest import CMAESBest\nfrom transopt.optimizer.acquisition_function.model_manage.CMAESPreSelect import CMAESPreSelect\nfrom transopt.optimizer.acquisition_function.model_manage.CMAESGeneration import CMAESGeneration\n"
  },
  {
    "path": "transopt/optimizer/acquisition_function/acf_base.py",
    "content": "# Copyright (c) 2016, the GPyOpt Authors\n# Licensed under the BSD 3-clause license (see LICENSE.txt)\nimport numpy as np\nimport scipy\nfrom GPyOpt import Design_space\nfrom GPyOpt.core.task.cost import constant_cost_withGradients\nfrom GPyOpt.optimization.acquisition_optimizer import AcquisitionOptimizer\nfrom GPyOpt.util import epmgp\n\n\nclass AcquisitionBase(object):\n    \"\"\"\n    Base class for acquisition functions in Bayesian Optimization\n\n    :param model: GPyOpt class of model\n    :param space: GPyOpt class of domain\n    :param optimizer: optimizer of the acquisition. Should be a GPyOpt optimizer\n\n    \"\"\"\n\n    analytical_gradient_prediction = False\n\n    def __init__(self, cost_withGradients=None, **kwargs):\n        self.analytical_gradient_acq = self.analytical_gradient_prediction and self.model.analytical_gradient_prediction # flag from the model to test if gradients are available\n\n        if 'optimizer_name' in kwargs:\n            self.optimizer_name = kwargs['optimizer']\n        else:\n            self.optimizer_name = 'lbfgs'\n\n        if cost_withGradients is  None:\n            self.cost_withGradients = constant_cost_withGradients\n        else:\n            self.cost_withGradients = cost_withGradients\n\n    @staticmethod\n    def fromDict(model, space, optimizer, cost_withGradients, config):\n        raise NotImplementedError()\n    \n    def link(self, model, space):\n        self.link_model(model=model)\n        self.link_space(space=space)\n    \n    def link_model(self, model):\n        self.model = model\n        \n    def link_space(self, space):\n        opt_space = []\n        for var_name in space.variables_order:\n            var_dic = {\n                'name': var_name,\n                'type': 'continuous',\n                'domain': space[var_name].search_space_range,\n            }\n            if space[var_name].type == 'categorical':\n                var_dic['type'] = 'discrete'\n\n            opt_space.append(var_dic.copy())\n            \n        self.space = Design_space(opt_space)\n        self.optimizer = AcquisitionOptimizer(self.space, self.optimizer_name)\n    \n\n    def acquisition_function(self,x):\n        \"\"\"\n        Takes an acquisition and weights it so the domain and cost are taken into account.\n        \"\"\"\n        f_acqu = self._compute_acq(x)\n        cost_x, _ = self.cost_withGradients(x)\n        x_z = x if self.space.model_dimensionality == self.space.objective_dimensionality else self.space.zip_inputs(x)\n        return -(f_acqu*self.space.indicator_constraints(x_z))/cost_x\n\n\n    def acquisition_function_withGradients(self, x):\n        \"\"\"\n        Takes an acquisition and it gradient and weights it so the domain and cost are taken into account.\n        \"\"\"\n        f_acqu,df_acqu = self._compute_acq_withGradients(x)\n        cost_x, cost_grad_x = self.cost_withGradients(x)\n        f_acq_cost = f_acqu/cost_x\n        df_acq_cost = (df_acqu*cost_x - f_acqu*cost_grad_x)/(cost_x**2)\n        x_z = x if self.space.model_dimensionality == self.space.objective_dimensionality else self.space.zip_inputs(x)\n        return -f_acq_cost*self.space.indicator_constraints(x_z), -df_acq_cost*self.space.indicator_constraints(x_z)\n\n    def optimize(self, duplicate_manager=None):\n        \"\"\"\n        Optimizes the acquisition function (uses a flag from the model to use gradients or not).\n        \"\"\"\n        if not self.analytical_gradient_acq:\n            out = 
self.optimizer.optimize(f=self.acquisition_function, duplicate_manager=duplicate_manager)\n        else:\n            out = self.optimizer.optimize(f=self.acquisition_function, f_df=self.acquisition_function_withGradients, duplicate_manager=duplicate_manager)\n        return out\n\n    def _compute_acq(self,x):\n\n        raise NotImplementedError('')\n\n    def _compute_acq_withGradients(self, x):\n\n        raise NotImplementedError('')\n"
  },
  {
    "path": "transopt/optimizer/acquisition_function/ei.py",
    "content": "import copy\n\nfrom GPyOpt.acquisitions.base import AcquisitionBase\nfrom GPyOpt.core.task.cost import constant_cost_withGradients\nfrom GPyOpt.util.general import get_quantiles\n\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n@acf_registry.register('EI')\nclass AcquisitionEI(AcquisitionBase):\n    \"\"\"\n    General template to create a new GPyOPt acquisition function\n\n    :param model: GPyOpt class of model\n    :param space: GPyOpt class of domain\n    :param optimizer: optimizer of the acquisition. Should be a GPyOpt optimizer\n    :param cost_withGradients: function that provides the evaluation cost and its gradients\n\n    \"\"\"\n    # --- Set this line to true if analytical gradients are available\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(AcquisitionEI, self).__init__()\n        if 'jitter' in config:\n            self.jitter = config['jitter']\n        else:\n            self.jitter = 0.01\n\n        if 'threshold' in config:\n            self.threshold = config['threshold']\n        else:\n            self.threshold = 0\n\n        self.cost_withGradients = constant_cost_withGradients\n\n    def _compute_acq(self, x):\n\n        m, s = self.model.predict(x)\n        fmin = self.model.get_fmin()\n        phi, Phi, u = get_quantiles(self.jitter, fmin, m, s)\n        f_acqu_ei = s * (u * Phi + phi)\n\n        return f_acqu_ei\n\n    def _compute_acq_withGradients(self, x):\n        # --- DEFINE YOUR AQUISITION (TO BE MAXIMIZED) AND ITS GRADIENT HERE HERE\n        #\n        # Compute here the value of the new acquisition function. Remember that x is a 2D  numpy array\n        # with a point in the domanin in each row. f_acqu_x should be a column vector containing the\n        # values of the acquisition at x. df_acqu_x contains is each row the values of the gradient of the\n        # acquisition at each point of x.\n        #\n        # NOTE: this function is optional. If note available the gradients will be approxiamted numerically.\n        raise NotImplementedError()\n"
  },
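  {
    "path": "demo/sketches/ei_formula_sketch.py",
    "content": "\"\"\"Sketch of the closed-form Expected Improvement that AcquisitionEI's\n_compute_acq evaluates through GPyOpt's get_quantiles, written out with\nscipy directly (this file is an illustrative assumption, not part of the\npackage). Under GPyOpt's convention u = (fmin - m - jitter) / s, the\nacquisition is EI = s * (u * Phi(u) + phi(u)).\"\"\"\nimport numpy as np\nfrom scipy.stats import norm\n\n\ndef expected_improvement(m, s, fmin, jitter=0.01):\n    # m, s: posterior mean and standard deviation at the candidate points.\n    u = (fmin - m - jitter) / s\n    return s * (u * norm.cdf(u) + norm.pdf(u))\n\n\nif __name__ == '__main__':\n    m = np.array([0.0, 0.5])\n    s = np.array([1.0, 0.2])\n    print(expected_improvement(m, s, fmin=0.1))\n"
  },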
  {
    "path": "transopt/optimizer/acquisition_function/get_acf.py",
    "content": "\nfrom transopt.agent.registry import acf_registry\n\ndef get_acf(acf_name, **kwargs):\n    \"\"\"Create the optimizer object.\"\"\"\n    acf_class = acf_registry.get(acf_name)\n\n    if acf_class is not None:\n        acf = acf_class(config=kwargs)\n    else:\n        print(f\"ACF '{acf_name}' not found in the registry.\")\n        raise NameError\n    return acf\n\n\n\n# def get_acf(acf_name, model, search_space, config, tabular=False):\n#     \"\"\"Create the optimizer object.\"\"\"\n#     acf_class = get_acf.get(acf_name)\n#     acquisition_optimizer = GPyOpt.optimization.AcquisitionOptimizer(search_space)\n#     if acf_class is not None:\n#         acquisition = acf_class(model=model, optimizer=acquisition_optimizer, space=search_space, config=config)\n#     else:\n#         # 处理任务名称不在注册表中的情况c\n#         print(f\"Acquisition '{acf_name}' not found in the registry.\")\n#         raise NameError\n\n#     return acquisition\n\n"
  },
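  {
    "path": "demo/sketches/get_acf_usage_sketch.py",
    "content": "\"\"\"Sketch of how an acquisition function is looked up and wired up (this\nfile is an illustrative assumption; the `model` and `space` objects are\nstand-ins supplied by the optimizer at runtime).\"\"\"\nfrom transopt.optimizer.acquisition_function.get_acf import get_acf\n\n# Extra keyword arguments become the acquisition's config dict.\nacf = get_acf('EI', jitter=0.05)\n\n# Before use, the caller attaches a fitted surrogate and the search space:\n#   acf.link(model, space)   # sets acf.model and builds the Design_space\n# after which acf.optimize() searches the space for the best candidate.\n"
  },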
  {
    "path": "transopt/optimizer/acquisition_function/lcb.py",
    "content": "# Copyright (c) 2016, the GPyOpt Authors\n# Licensed under the BSD 3-clause license (see LICENSE.txt)\n\nfrom GPyOpt.acquisitions.base import AcquisitionBase\n\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n@acf_registry.register('LCB')\nclass AcquisitionLCB(AcquisitionBase):\n    \"\"\"\n    GP-Lower Confidence Bound acquisition function with constant exploration weight.\n    See:\n\n    Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design\n    Srinivas et al., Proc. International Conference on Machine Learning (ICML), 2010\n\n    :param model: GPyOpt class of model\n    :param space: GPyOpt class of domain\n    :param optimizer: optimizer of the acquisition. Should be a GPyOpt optimizer\n    :param cost_withGradients: function\n    :param jitter: positive value to make the acquisition more explorative\n\n    .. Note:: does not allow to be used with cost\n\n    \"\"\"\n\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(AcquisitionLCB, self).__init__()\n        if 'exploration_weight' in config:\n            self.exploration_weight = config['exploration_weight']\n        else:\n            self.exploration_weight = 1\n\n    def _compute_acq(self, x):\n        \"\"\"\n        Computes the GP-Lower Confidence Bound\n        \"\"\"\n        m, s = self.model.predict(x)\n        f_acqu = -m + self.exploration_weight * s\n        return f_acqu\n\n    def _compute_acq_withGradients(self, x):\n        \"\"\"\n        Computes the GP-Lower Confidence Bound and its derivative\n        \"\"\"\n        m, s, dmdx, dsdx = self.model.predict_withGradients(x)\n        f_acqu = -m + self.exploration_weight * s\n        df_acqu = -dmdx + self.exploration_weight * dsdx\n        return f_acqu, df_acqu\n\n"
  },
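The sign convention in `AcquisitionLCB` is easy to trip over: the acquisition is maximized, so the predictive mean enters negated. A small illustrative check (all values made up):

```python
import numpy as np

def lcb_score(m, s, exploration_weight=1.0):
    # argmax(-m + w*s) == argmin(m - w*s): prefer low mean and high uncertainty.
    return -m + exploration_weight * s

m = np.array([0.4, 0.1, 0.7])  # posterior means
s = np.array([0.2, 0.5, 0.1])  # posterior standard deviations
print(int(np.argmax(lcb_score(m, s))))  # -> 1
```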
  {
    "path": "transopt/optimizer/acquisition_function/model_manage/CMAESBest.py",
    "content": "import math\n\nimport numpy as np\nfrom GPyOpt import Design_space\nfrom pymoo.algorithms.soo.nonconvex.cmaes import CMAES\nfrom pymoo.core.problem import Problem\n\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n\n@acf_registry.register('CMAES-Best')\nclass CMAESBest(AcquisitionBase):\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(CMAESBest, self).__init__()\n        config_dict = {}\n        if config != \"\":\n            if ',' in config:\n                key_value_pairs = config.split(',')\n            else:\n                key_value_pairs = [config]\n            for pair in key_value_pairs:\n                key, value = pair.split(':')\n                config_dict[key.strip()] = value.strip()\n        if 'k' in config_dict:\n            self.k = int(config_dict['k'])\n        else:\n            self.k = 2\n        if 'n' in config_dict:\n            self.pop_size = 4 + math.floor(3 * math.log(int(config_dict['n'])))\n        else:\n            self.pop_size = 10\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n    def link_space(self, space):\n        opt_space = []\n        for var_name in space.variables_order:\n            var_dic = {\n                'name': var_name,\n                'type': 'continuous',\n                'domain': space[var_name].search_space_range,\n            }\n            if space[var_name].type == 'categorical' or 'integer':\n                var_dic['type'] = 'discrete'\n\n            opt_space.append(var_dic.copy())\n            \n        self.space = Design_space(opt_space)\n\n        if self.ea is None:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n            self.ea = CMAES(pop_size=self.pop_size)\n            self.ea.setup(self.problem, verbose=False)\n        else:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n\n    def optimize(self, duplicate_manager=None):\n        pop = self.ea.ask()\n        self.ea.evaluator.eval(self.problem, pop)\n        pop_X = np.array([p.X for p in pop])\n        pop_F = np.array([p.F for p in pop])\n        top_k_idx = sorted(range(len(pop_F)), key=lambda i: pop_F[i])[:self.k]\n        elites = pop_X[top_k_idx]\n        elites_F = pop_F[top_k_idx]\n        return elites, elites_F\n\n    def _compute_acq(self, x):\n        raise NotImplementedError()\n\n    def _compute_acq_withGradients(self, x):\n        raise NotImplementedError()\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = self.predict(x)"
  },
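The "Best" strategy above asks the EA for one population, scores it with the surrogate, and keeps the `k` best rows. A dependency-light sketch of just that selection step; the quadratic `surrogate` lambda is a stand-in for `self.model.predict`, not project code:

```python
import numpy as np

def select_k_best(pop_X, pop_F, k=2):
    # Keep the k rows with the smallest predicted objective.
    idx = np.argsort(pop_F.ravel())[:k]
    return pop_X[idx], pop_F[idx]

rng = np.random.default_rng(0)
pop_X = rng.uniform(-1.0, 1.0, size=(10, 3))      # candidate population
pop_F = (pop_X ** 2).sum(axis=1, keepdims=True)   # stand-in surrogate scores
elites, elites_F = select_k_best(pop_X, pop_F, k=2)
print(elites.shape, elites_F.ravel())
```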
  {
    "path": "transopt/optimizer/acquisition_function/model_manage/CMAESGeneration.py",
    "content": "import math\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom GPyOpt import Design_space\nfrom pymoo.algorithms.soo.nonconvex.cmaes import CMAES\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n\n@acf_registry.register('CMAES-Generation')\nclass CMAESGeneration(AcquisitionBase):\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(CMAESGeneration, self).__init__()\n        config_dict = {}\n        if config != \"\":\n            if ',' in config:\n                key_value_pairs = config.split(',')\n            else:\n                key_value_pairs = [config]\n            for pair in key_value_pairs:\n                key, value = pair.split(':')\n                config_dict[key.strip()] = value.strip()\n        if 'k' in config_dict:\n            self.k = int(config_dict['k'])\n        else:\n            self.k = 1\n        if 'n' in config_dict:\n            self.pop_size = 4 + math.floor(3 * math.log(int(config_dict['n'])))\n        else:\n            self.pop_size = 10\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n    def link_space(self, space):\n        opt_space = []\n        for var_name in space.variables_order:\n            var_dic = {\n                'name': var_name,\n                'type': 'continuous',\n                'domain': space[var_name].search_space_range,\n            }\n            if space[var_name].type == 'categorical' or 'integer':\n                var_dic['type'] = 'discrete'\n\n            opt_space.append(var_dic.copy())\n            \n        self.space = Design_space(opt_space)\n\n        if self.ea is None:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n            self.ea = CMAES(pop_size=self.pop_size)\n            self.ea.setup(self.problem, verbose=False)\n        else:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n\n    def optimize(self, duplicate_manager=None):\n        for i in range(self.k):\n            pop = self.ea.ask()\n        self.ea.evaluator.eval(self.problem, pop)\n        pop_X = np.array([p.X for p in pop])\n        pop_F = np.array([p.F for p in pop])\n        top_k_idx = range(len(pop_X))\n        elites = pop_X\n        elites_F = pop_F\n        return elites, elites_F\n\n    def _compute_acq(self, x):\n        raise NotImplementedError()\n\n    def _compute_acq_withGradients(self, x):\n        raise NotImplementedError()\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = self.predict(x)"
  },
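`CMAESGeneration` (and its DE/GA/PSO siblings below) differs from the "Best" variant only in the selection rule: it draws `k` populations but evaluates and returns the whole final generation, with no truncation. Sketched with illustrative stand-ins for `ea.ask()` and `model.predict`:

```python
import numpy as np

def last_generation(ask_population, surrogate, k=1):
    # Only the k-th population is scored and returned in full.
    for _ in range(k):
        X = ask_population()
    F = surrogate(X)
    return X, F

rng = np.random.default_rng(1)
X, F = last_generation(
    lambda: rng.uniform(-1.0, 1.0, size=(10, 3)),     # stand-in for ea.ask()
    lambda X: (X ** 2).sum(axis=1, keepdims=True),    # stand-in for model.predict
)
print(X.shape, F.shape)  # (10, 3) (10, 1)
```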
  {
    "path": "transopt/optimizer/acquisition_function/model_manage/CMAESPreSelect.py",
    "content": "import math\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom GPyOpt import Design_space\nfrom pymoo.algorithms.soo.nonconvex.cmaes import CMAES\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n\n@acf_registry.register('CMAES-PreSelect')\nclass CMAESPreSelect(AcquisitionBase):\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(CMAESPreSelect, self).__init__()\n        config_dict = {}\n        if config != \"\":\n            if ',' in config:\n                key_value_pairs = config.split(',')\n            else:\n                key_value_pairs = [config]\n            for pair in key_value_pairs:\n                key, value = pair.split(':')\n                config_dict[key.strip()] = value.strip()\n        if 'k' in config_dict:\n            self.k = int(config_dict['k'])\n        else:\n            self.k = 2\n        if 'n' in config_dict:\n            self.pop_size = 4 + math.floor(3 * math.log(int(config_dict['n'])))\n        else:\n            self.pop_size = 10\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n    def link_space(self, space):\n        opt_space = []\n        for var_name in space.variables_order:\n            var_dic = {\n                'name': var_name,\n                'type': 'continuous',\n                'domain': space[var_name].search_space_range,\n            }\n            if space[var_name].type == 'categorical' or 'integer':\n                var_dic['type'] = 'discrete'\n\n            opt_space.append(var_dic.copy())\n            \n        self.space = Design_space(opt_space)\n\n        if self.ea is None:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n            self.ea = CMAES(pop_size=self.pop_size)\n            self.ea.setup(self.problem, verbose=False)\n        else:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n\n    def optimize(self, duplicate_manager=None):\n        pop = self.ea.ask()\n        self.ea.evaluator.eval(self.problem, pop)\n        pop_X = np.array([p.X for p in pop])\n        pop_F = np.array([p.F for p in pop])\n        total_pop_X = pop_X\n        total_pop_F = pop_F\n        for i in range(self.k - 1):\n            pop = self.ea.ask()\n            self.ea.evaluator.eval(self.problem, pop)\n            pop_X = np.array([p.X for p in pop])\n            pop_F = np.array([p.F for p in pop])\n            total_pop_X = np.concatenate((total_pop_X, pop_X))\n            total_pop_F = np.concatenate((total_pop_F, pop_F))\n        top_k_idx = sorted(range(len(total_pop_F)), key=lambda i: total_pop_F[i])[:self.pop_size]\n        elites = total_pop_X[top_k_idx]\n        elites_F = total_pop_F[top_k_idx]\n        return elites, elites_F\n\n    def _compute_acq(self, x):\n        raise NotImplementedError()\n\n    def _compute_acq_withGradients(self, x):\n        raise NotImplementedError()\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    def _evaluate(self, x, out, *args, **kwargs):\n   
     out[\"F\"], _ = self.predict(x)"
  },
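The "PreSelect" strategy pools `k` populations and keeps the `pop_size` best under the surrogate, so the next generation starts from surrogate-filtered candidates. The same selection in isolation, again with illustrative stand-ins for `ea.ask()` and `model.predict`:

```python
import numpy as np

def preselect(ask_population, surrogate, k=2, pop_size=10):
    # Pool k candidate populations, then keep the pop_size best rows.
    X = np.vstack([ask_population() for _ in range(k)])
    F = surrogate(X)
    idx = np.argsort(F.ravel())[:pop_size]
    return X[idx], F[idx]

rng = np.random.default_rng(2)
X_sel, F_sel = preselect(
    lambda: rng.uniform(-1.0, 1.0, size=(10, 3)),     # stand-in for ea.ask()
    lambda X: (X ** 2).sum(axis=1, keepdims=True),    # stand-in for model.predict
)
print(X_sel.shape)  # (10, 3): best 10 of the 20 pooled candidates
```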
  {
    "path": "transopt/optimizer/acquisition_function/model_manage/DEBest.py",
    "content": "import math\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom GPyOpt import Design_space\nfrom pymoo.algorithms.soo.nonconvex.de import DE\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n\n@acf_registry.register('DE-Best')\nclass DEBest(AcquisitionBase):\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(DEBest, self).__init__()\n        config_dict = {}\n        if config != \"\":\n            if ',' in config:\n                key_value_pairs = config.split(',')\n            else:\n                key_value_pairs = [config]\n            for pair in key_value_pairs:\n                key, value = pair.split(':')\n                config_dict[key.strip()] = value.strip()\n        if 'k' in config_dict:\n            self.k = int(config_dict['k'])\n        else:\n            self.k = 2\n        if 'n' in config_dict:\n            self.pop_size = 4 + math.floor(3 * math.log(int(config_dict['n'])))\n        else:\n            self.pop_size = 10\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n    def link_space(self, space):\n        opt_space = []\n        for var_name in space.variables_order:\n            var_dic = {\n                'name': var_name,\n                'type': 'continuous',\n                'domain': space[var_name].search_space_range,\n            }\n            if space[var_name].type == 'categorical' or 'integer':\n                var_dic['type'] = 'discrete'\n\n            opt_space.append(var_dic.copy())\n            \n        self.space = Design_space(opt_space)\n\n        if self.ea is None:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n            self.ea = DE(self.pop_size)\n            self.ea.setup(self.problem, verbose=False)\n        else:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n\n    def optimize(self, duplicate_manager=None):\n        pop = self.ea.ask()\n        self.ea.evaluator.eval(self.problem, pop)\n        pop_X = np.array([p.X for p in pop])\n        pop_F = np.array([p.F for p in pop])\n        top_k_idx = sorted(range(len(pop_F)), key=lambda i: pop_F[i])[:self.k]\n        elites = pop_X[top_k_idx]\n        elites_F = pop_F[top_k_idx]\n        return elites, elites_F\n\n    def _compute_acq(self, x):\n        raise NotImplementedError()\n\n    def _compute_acq_withGradients(self, x):\n        raise NotImplementedError()\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = self.predict(x)"
  },
  {
    "path": "transopt/optimizer/acquisition_function/model_manage/DEGeneration.py",
    "content": "import math\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom GPyOpt import Design_space\nfrom pymoo.algorithms.soo.nonconvex.de import DE\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n\n@acf_registry.register('DE-Generation')\nclass DEGeneration(AcquisitionBase):\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(DEGeneration, self).__init__()\n        config_dict = {}\n        if config != \"\":\n            if ',' in config:\n                key_value_pairs = config.split(',')\n            else:\n                key_value_pairs = [config]\n            for pair in key_value_pairs:\n                key, value = pair.split(':')\n                config_dict[key.strip()] = value.strip()\n        if 'k' in config_dict:\n            self.k = int(config_dict['k'])\n        else:\n            self.k = 1\n        if 'n' in config_dict:\n            self.pop_size = 4 + math.floor(3 * math.log(int(config_dict['n'])))\n        else:\n            self.pop_size = 10\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n    def link_space(self, space):\n        opt_space = []\n        for var_name in space.variables_order:\n            var_dic = {\n                'name': var_name,\n                'type': 'continuous',\n                'domain': space[var_name].search_space_range,\n            }\n            if space[var_name].type == 'categorical' or 'integer':\n                var_dic['type'] = 'discrete'\n\n            opt_space.append(var_dic.copy())\n            \n        self.space = Design_space(opt_space)\n\n        if self.ea is None:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n            self.ea = DE(self.pop_size)\n            self.ea.setup(self.problem, verbose=False)\n        else:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n\n    def optimize(self, duplicate_manager=None):\n        for i in range(self.k):\n            pop = self.ea.ask()\n        self.ea.evaluator.eval(self.problem, pop)\n        pop_X = np.array([p.X for p in pop])\n        pop_F = np.array([p.F for p in pop])\n        top_k_idx = range(len(pop_X))\n        elites = pop_X\n        elites_F = pop_F\n        return elites, elites_F\n\n    def _compute_acq(self, x):\n        raise NotImplementedError()\n\n    def _compute_acq_withGradients(self, x):\n        raise NotImplementedError()\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = self.predict(x)"
  },
  {
    "path": "transopt/optimizer/acquisition_function/model_manage/DEPreSelect.py",
    "content": "import math\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom GPyOpt import Design_space\nfrom pymoo.algorithms.soo.nonconvex.de import DE\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n\n@acf_registry.register('DE-PreSelect')\nclass DEPreSelect(AcquisitionBase):\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(DEPreSelect, self).__init__()\n        config_dict = {}\n        if config != \"\":\n            if ',' in config:\n                key_value_pairs = config.split(',')\n            else:\n                key_value_pairs = [config]\n            for pair in key_value_pairs:\n                key, value = pair.split(':')\n                config_dict[key.strip()] = value.strip()\n        if 'k' in config_dict:\n            self.k = int(config_dict['k'])\n        else:\n            self.k = 2\n        if 'n' in config_dict:\n            self.pop_size = 4 + math.floor(3 * math.log(int(config_dict['n'])))\n        else:\n            self.pop_size = 10\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n    def link_space(self, space):\n        opt_space = []\n        for var_name in space.variables_order:\n            var_dic = {\n                'name': var_name,\n                'type': 'continuous',\n                'domain': space[var_name].search_space_range,\n            }\n            if space[var_name].type == 'categorical' or 'integer':\n                var_dic['type'] = 'discrete'\n\n            opt_space.append(var_dic.copy())\n            \n        self.space = Design_space(opt_space)\n\n        if self.ea is None:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n            self.ea = DE(self.pop_size)\n            self.ea.setup(self.problem, verbose=False)\n        else:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n\n    def optimize(self, duplicate_manager=None):\n        pop = self.ea.ask()\n        self.ea.evaluator.eval(self.problem, pop)\n        pop_X = np.array([p.X for p in pop])\n        pop_F = np.array([p.F for p in pop])\n        total_pop_X = pop_X\n        total_pop_F = pop_F\n        for i in range(self.k - 1):\n            pop = self.ea.ask()\n            self.ea.evaluator.eval(self.problem, pop)\n            pop_X = np.array([p.X for p in pop])\n            pop_F = np.array([p.F for p in pop])\n            total_pop_X = np.concatenate((total_pop_X, pop_X))\n            total_pop_F = np.concatenate((total_pop_F, pop_F))\n        top_k_idx = sorted(range(len(total_pop_F)), key=lambda i: total_pop_F[i])[:self.pop_size]\n        elites = total_pop_X[top_k_idx]\n        elites_F = total_pop_F[top_k_idx]\n        return elites, elites_F\n\n    def _compute_acq(self, x):\n        raise NotImplementedError()\n\n    def _compute_acq_withGradients(self, x):\n        raise NotImplementedError()\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = 
self.predict(x)"
  },
  {
    "path": "transopt/optimizer/acquisition_function/model_manage/GABest.py",
    "content": "import math\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom GPyOpt import Design_space\nfrom pymoo.algorithms.soo.nonconvex.ga import GA\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n\n@acf_registry.register('GA-Best')\nclass GABest(AcquisitionBase):\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(GABest, self).__init__()\n        config_dict = {}\n        if config != \"\":\n            if ',' in config:\n                key_value_pairs = config.split(',')\n            else:\n                key_value_pairs = [config]\n            for pair in key_value_pairs:\n                key, value = pair.split(':')\n                config_dict[key.strip()] = value.strip()\n        if 'k' in config_dict:\n            self.k = int(config_dict['k'])\n        else:\n            self.k = 2\n        if 'n' in config_dict:\n            self.pop_size = 4 + math.floor(3 * math.log(int(config_dict['n'])))\n        else:\n            self.pop_size = 10\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n    def link_space(self, space):\n        opt_space = []\n        for var_name in space.variables_order:\n            var_dic = {\n                'name': var_name,\n                'type': 'continuous',\n                'domain': space[var_name].search_space_range,\n            }\n            if space[var_name].type == 'categorical' or 'integer':\n                var_dic['type'] = 'discrete'\n\n            opt_space.append(var_dic.copy())\n            \n        self.space = Design_space(opt_space)\n\n        if self.ea is None:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n            self.ea = GA(self.pop_size)\n            self.ea.setup(self.problem, verbose=False)\n        else:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n\n    def optimize(self, duplicate_manager=None):\n        pop = self.ea.ask()\n        self.ea.evaluator.eval(self.problem, pop)\n        pop_X = np.array([p.X for p in pop])\n        pop_F = np.array([p.F for p in pop])\n        top_k_idx = sorted(range(len(pop_F)), key=lambda i: pop_F[i])[:self.k]\n        elites = pop_X[top_k_idx]\n        elites_F = pop_F[top_k_idx]\n        return elites, elites_F\n\n    def _compute_acq(self, x):\n        raise NotImplementedError()\n\n    def _compute_acq_withGradients(self, x):\n        raise NotImplementedError()\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = self.predict(x)"
  },
  {
    "path": "transopt/optimizer/acquisition_function/model_manage/GAGeneration.py",
    "content": "import math\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom GPyOpt import Design_space\nfrom pymoo.algorithms.soo.nonconvex.ga import GA\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n\n@acf_registry.register('GA-Generation')\nclass GAGeneration(AcquisitionBase):\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(GAGeneration, self).__init__()\n        config_dict = {}\n        if config != \"\":\n            if ',' in config:\n                key_value_pairs = config.split(',')\n            else:\n                key_value_pairs = [config]\n            for pair in key_value_pairs:\n                key, value = pair.split(':')\n                config_dict[key.strip()] = value.strip()\n        if 'k' in config_dict:\n            self.k = int(config_dict['k'])\n        else:\n            self.k = 1\n        if 'n' in config_dict:\n            self.pop_size = 4 + math.floor(3 * math.log(int(config_dict['n'])))\n        else:\n            self.pop_size = 10\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n    def link_space(self, space):\n        opt_space = []\n        for var_name in space.variables_order:\n            var_dic = {\n                'name': var_name,\n                'type': 'continuous',\n                'domain': space[var_name].search_space_range,\n            }\n            if space[var_name].type == 'categorical' or 'integer':\n                var_dic['type'] = 'discrete'\n\n            opt_space.append(var_dic.copy())\n            \n        self.space = Design_space(opt_space)\n\n        if self.ea is None:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n            self.ea = GA(self.pop_size)\n            self.ea.setup(self.problem, verbose=False)\n        else:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n\n    def optimize(self, duplicate_manager=None):\n        for i in range(self.k):\n            pop = self.ea.ask()\n        self.ea.evaluator.eval(self.problem, pop)\n        pop_X = np.array([p.X for p in pop])\n        pop_F = np.array([p.F for p in pop])\n        top_k_idx = range(len(pop_X))\n        elites = pop_X\n        elites_F = pop_F\n        return elites, elites_F\n\n    def _compute_acq(self, x):\n        raise NotImplementedError()\n\n    def _compute_acq_withGradients(self, x):\n        raise NotImplementedError()\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = self.predict(x)"
  },
  {
    "path": "transopt/optimizer/acquisition_function/model_manage/GAPreSelect.py",
    "content": "import math\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom GPyOpt import Design_space\nfrom pymoo.algorithms.soo.nonconvex.ga import GA\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n\n@acf_registry.register('GA-PreSelect')\nclass GAPreSelect(AcquisitionBase):\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(GAPreSelect, self).__init__()\n        config_dict = {}\n        if config != \"\":\n            if ',' in config:\n                key_value_pairs = config.split(',')\n            else:\n                key_value_pairs = [config]\n            for pair in key_value_pairs:\n                key, value = pair.split(':')\n                config_dict[key.strip()] = value.strip()\n        if 'k' in config_dict:\n            self.k = int(config_dict['k'])\n        else:\n            self.k = 2\n        if 'n' in config_dict:\n            self.pop_size = 4 + math.floor(3 * math.log(int(config_dict['n'])))\n        else:\n            self.pop_size = 10\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n    def link_space(self, space):\n        opt_space = []\n        for var_name in space.variables_order:\n            var_dic = {\n                'name': var_name,\n                'type': 'continuous',\n                'domain': space[var_name].search_space_range,\n            }\n            if space[var_name].type == 'categorical' or 'integer':\n                var_dic['type'] = 'discrete'\n\n            opt_space.append(var_dic.copy())\n            \n        self.space = Design_space(opt_space)\n\n        if self.ea is None:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n            self.ea = GA(self.pop_size)\n            self.ea.setup(self.problem, verbose=False)\n        else:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n\n    def optimize(self, duplicate_manager=None):\n        pop = self.ea.ask()\n        self.ea.evaluator.eval(self.problem, pop)\n        pop_X = np.array([p.X for p in pop])\n        pop_F = np.array([p.F for p in pop])\n        total_pop_X = pop_X\n        total_pop_F = pop_F\n        for i in range(self.k - 1):\n            pop = self.ea.ask()\n            self.ea.evaluator.eval(self.problem, pop)\n            pop_X = np.array([p.X for p in pop])\n            pop_F = np.array([p.F for p in pop])\n            total_pop_X = np.concatenate((total_pop_X, pop_X))\n            total_pop_F = np.concatenate((total_pop_F, pop_F))\n        top_k_idx = sorted(range(len(total_pop_F)), key=lambda i: total_pop_F[i])[:self.pop_size]\n        elites = total_pop_X[top_k_idx]\n        elites_F = total_pop_F[top_k_idx]\n        return elites, elites_F\n\n    def _compute_acq(self, x):\n        raise NotImplementedError()\n\n    def _compute_acq_withGradients(self, x):\n        raise NotImplementedError()\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = 
self.predict(x)"
  },
  {
    "path": "transopt/optimizer/acquisition_function/model_manage/PSOBest.py",
    "content": "import math\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom GPyOpt import Design_space\nfrom pymoo.algorithms.soo.nonconvex.pso import PSO\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n\n@acf_registry.register('PSO-Best')\nclass PSOBest(AcquisitionBase):\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(PSOBest, self).__init__()\n        config_dict = {}\n        if config != \"\":\n            if ',' in config:\n                key_value_pairs = config.split(',')\n            else:\n                key_value_pairs = [config]\n            for pair in key_value_pairs:\n                key, value = pair.split(':')\n                config_dict[key.strip()] = value.strip()\n        if 'k' in config_dict:\n            self.k = int(config_dict['k'])\n        else:\n            self.k = 2\n        if 'n' in config_dict:\n            self.pop_size = 4 + math.floor(3 * math.log(int(config_dict['n'])))\n        else:\n            self.pop_size = 10\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n    def link_space(self, space):\n        opt_space = []\n        for var_name in space.variables_order:\n            var_dic = {\n                'name': var_name,\n                'type': 'continuous',\n                'domain': space[var_name].search_space_range,\n            }\n            if space[var_name].type == 'categorical' or 'integer':\n                var_dic['type'] = 'discrete'\n\n            opt_space.append(var_dic.copy())\n            \n        self.space = Design_space(opt_space)\n\n        if self.ea is None:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n            self.ea = PSO(self.pop_size)\n            self.ea.setup(self.problem, verbose=False)\n        else:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n\n    def optimize(self, duplicate_manager=None):\n        pop = self.ea.ask()\n        self.ea.evaluator.eval(self.problem, pop)\n        pop_X = np.array([p.X for p in pop])\n        pop_F = np.array([p.F for p in pop])\n        top_k_idx = sorted(range(len(pop_F)), key=lambda i: pop_F[i])[:self.k]\n        elites = pop_X[top_k_idx]\n        elites_F = pop_F[top_k_idx]\n        return elites, elites_F\n\n    def _compute_acq(self, x):\n        raise NotImplementedError()\n\n    def _compute_acq_withGradients(self, x):\n        raise NotImplementedError()\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = self.predict(x)"
  },
  {
    "path": "transopt/optimizer/acquisition_function/model_manage/PSOGeneration.py",
    "content": "import math\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom GPyOpt import Design_space\nfrom pymoo.algorithms.soo.nonconvex.pso import PSO\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n\n@acf_registry.register('PSO-Generation')\nclass PSOGeneration(AcquisitionBase):\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(PSOGeneration, self).__init__()\n        config_dict = {}\n        if config != \"\":\n            if ',' in config:\n                key_value_pairs = config.split(',')\n            else:\n                key_value_pairs = [config]\n            for pair in key_value_pairs:\n                key, value = pair.split(':')\n                config_dict[key.strip()] = value.strip()\n        if 'k' in config_dict:\n            self.k = int(config_dict['k'])\n        else:\n            self.k = 1\n        if 'n' in config_dict:\n            self.pop_size = 4 + math.floor(3 * math.log(int(config_dict['n'])))\n        else:\n            self.pop_size = 10\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n    def link_space(self, space):\n        opt_space = []\n        for var_name in space.variables_order:\n            var_dic = {\n                'name': var_name,\n                'type': 'continuous',\n                'domain': space[var_name].search_space_range,\n            }\n            if space[var_name].type == 'categorical' or 'integer':\n                var_dic['type'] = 'discrete'\n\n            opt_space.append(var_dic.copy())\n            \n        self.space = Design_space(opt_space)\n\n        if self.ea is None:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n            self.ea = PSO(self.pop_size)\n            self.ea.setup(self.problem, verbose=False)\n        else:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n\n    def optimize(self, duplicate_manager=None):\n        for i in range(self.k):\n            pop = self.ea.ask()\n        self.ea.evaluator.eval(self.problem, pop)\n        pop_X = np.array([p.X for p in pop])\n        pop_F = np.array([p.F for p in pop])\n        top_k_idx = range(len(pop_X))\n        elites = pop_X\n        elites_F = pop_F\n        return elites, elites_F\n\n    def _compute_acq(self, x):\n        raise NotImplementedError()\n\n    def _compute_acq_withGradients(self, x):\n        raise NotImplementedError()\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = self.predict(x)"
  },
  {
    "path": "transopt/optimizer/acquisition_function/model_manage/PSOPreSelect.py",
    "content": "import math\nimport numpy as np\nfrom pymoo.core.problem import Problem\nfrom GPyOpt import Design_space\nfrom pymoo.algorithms.soo.nonconvex.pso import PSO\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n\n@acf_registry.register('PSO-PreSelect')\nclass PSOPreSelect(AcquisitionBase):\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(PSOPreSelect, self).__init__()\n        config_dict = {}\n        if config != \"\":\n            if ',' in config:\n                key_value_pairs = config.split(',')\n            else:\n                key_value_pairs = [config]\n            for pair in key_value_pairs:\n                key, value = pair.split(':')\n                config_dict[key.strip()] = value.strip()\n        if 'k' in config_dict:\n            self.k = int(config_dict['k'])\n        else:\n            self.k = 2\n        if 'n' in config_dict:\n            self.pop_size = 4 + math.floor(3 * math.log(int(config_dict['n'])))\n        else:\n            self.pop_size = 10\n        self.model = None\n        self.ea = None\n        self.problem = None\n\n    def link_space(self, space):\n        opt_space = []\n        for var_name in space.variables_order:\n            var_dic = {\n                'name': var_name,\n                'type': 'continuous',\n                'domain': space[var_name].search_space_range,\n            }\n            if space[var_name].type == 'categorical' or 'integer':\n                var_dic['type'] = 'discrete'\n\n            opt_space.append(var_dic.copy())\n            \n        self.space = Design_space(opt_space)\n\n        if self.ea is None:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n            self.ea = PSO(self.pop_size)\n            self.ea.setup(self.problem, verbose=False)\n        else:\n            self.problem = EAProblem(self.space.config_space, self.model.predict)\n\n    def optimize(self, duplicate_manager=None):\n        pop = self.ea.ask()\n        self.ea.evaluator.eval(self.problem, pop)\n        pop_X = np.array([p.X for p in pop])\n        pop_F = np.array([p.F for p in pop])\n        total_pop_X = pop_X\n        total_pop_F = pop_F\n        for i in range(self.k - 1):\n            pop = self.ea.ask()\n            self.ea.evaluator.eval(self.problem, pop)\n            pop_X = np.array([p.X for p in pop])\n            pop_F = np.array([p.F for p in pop])\n            total_pop_X = np.concatenate((total_pop_X, pop_X))\n            total_pop_F = np.concatenate((total_pop_F, pop_F))\n        top_k_idx = sorted(range(len(total_pop_F)), key=lambda i: total_pop_F[i])[:self.pop_size]\n        elites = total_pop_X[top_k_idx]\n        elites_F = total_pop_F[top_k_idx]\n        return elites, elites_F\n\n    def _compute_acq(self, x):\n        raise NotImplementedError()\n\n    def _compute_acq_withGradients(self, x):\n        raise NotImplementedError()\n\n\nclass EAProblem(Problem):\n    def __init__(self, space, predict):\n        input_dim = len(space)\n        xl = []\n        xu = []\n        for var_info in space:\n            var_domain = var_info['domain']\n            xl.append(var_domain[0])\n            xu.append(var_domain[1])\n        xl = np.array(xl)\n        xu = np.array(xu)\n        self.predict = predict\n        super().__init__(n_var=input_dim, n_obj=1, xl=xl, xu=xu)\n\n    def _evaluate(self, x, out, *args, **kwargs):\n        out[\"F\"], _ = 
self.predict(x)"
  },
  {
    "path": "transopt/optimizer/acquisition_function/moeadego.py",
    "content": "import GPy\nimport numpy as np\nimport scipy.optimize as opt\nfrom scipy.stats import *\nfrom scipy.spatial import distance\nfrom GPyOpt.util.general import get_quantiles\n\nfrom agent.registry import acf_registry\nfrom transopt.utils.hypervolume import calc_hypervolume\nfrom GPyOpt.optimization.acquisition_optimizer import AcquisitionOptimizer\n\n\n@acf_registry.register(\"MOEADEGO\")\nclass MOEADEGO:\n    def __init__(self, model, space, optimizer, config):\n        self.optimizer = optimizer\n        self.model = model\n        self.model_id = 0\n        if 'jitter' in config:\n            self.jitter = config['jitter']\n        else:\n            self.jitter = 0.1\n\n        if 'threshold' in config:\n            self.threshold = config['threshold']\n        else:\n            self.threshold = 0\n    def _compute_acq(self, x):\n        m, s = self.model.predict_by_id(x, self.model_id)\n        fmin = self.model.get_fmin_by_id(self.model_id)\n        phi, Phi, u = get_quantiles(self.jitter, fmin, m, s)\n        f_acqu_ei = s * (u * Phi + phi)\n\n        return -f_acqu_ei\n\n    def set_model_id(self, idx):\n        self.model_id = idx\n    def optimize(self, duplicate_manager=None):\n        space = self.model.search_space\n        self.acquisition_optimizer = AcquisitionOptimizer(space, 'lbfgs')  ## more arguments may come here\n        suggested_sample = []\n        suggested_acfvalue = []\n        for i in range(len(self.model.model_list)):\n            self.set_model_id(i)\n            suggest_x, acf_value = self.acquisition_optimizer.optimize(self._compute_acq)\n            suggested_sample.append(suggest_x)\n            suggested_acfvalue.append(acf_value)\n        suggested_sample = np.vstack(suggested_sample)\n        suggested_acfvalue = np.vstack(suggested_acfvalue)\n        return suggested_sample, suggested_acfvalue\n\n"
  },
  {
    "path": "transopt/optimizer/acquisition_function/pi.py",
    "content": "import copy\n\nfrom GPyOpt.core.task.cost import constant_cost_withGradients\nfrom GPyOpt.util.general import get_quantiles\n\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n@acf_registry.register('PI')\nclass AcquisitionPI(AcquisitionBase):\n    \"\"\"\n    General template to create a new GPyOPt acquisition function\n\n    :param model: GPyOpt class of model\n    :param space: GPyOpt class of domain\n    :param optimizer: optimizer of the acquisition. Should be a GPyOpt optimizer\n    :param cost_withGradients: function that provides the evaluation cost and its gradients\n\n    \"\"\"\n    # --- Set this line to true if analytical gradients are available\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(AcquisitionPI, self).__init__()\n        if 'jitter' in config:\n            self.jitter = config['jitter']\n        else:\n            self.jitter = 0.01\n        if 'threshold' in config:\n            self.threshold = config['threshold']\n        else:\n            self.threshold = 0\n\n        self.cost_withGradients = constant_cost_withGradients\n\n    def _compute_acq(self, x):\n\n        m, s = self.model.predict(x)\n        fmin = self.model.get_fmin()\n        phi, Phi, u = get_quantiles(self.jitter, fmin, m, s)\n        f_acqu_pi = Phi\n\n        return f_acqu_pi\n\n    def _compute_acq_withGradients(self, x):\n        # --- DEFINE YOUR AQUISITION (TO BE MAXIMIZED) AND ITS GRADIENT HERE HERE\n        #\n        # Compute here the value of the new acquisition function. Remember that x is a 2D  numpy array\n        # with a point in the domanin in each row. f_acqu_x should be a column vector containing the\n        # values of the acquisition at x. df_acqu_x contains is each row the values of the gradient of the\n        # acquisition at each point of x.\n        #\n        # NOTE: this function is optional. If note available the gradients will be approxiamted numerically.\n        raise NotImplementedError()\n"
  },
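Like EI, the PI value reduces to a single Gaussian CDF. A standalone numpy sketch with `scipy.stats.norm` in place of `get_quantiles` (input values made up):

```python
import numpy as np
from scipy.stats import norm

def probability_of_improvement(mu, sigma, fmin, jitter=0.01):
    # PI = Phi(u), with u = (fmin - mu - jitter) / sigma.
    return norm.cdf((fmin - mu - jitter) / sigma)

print(probability_of_improvement(np.array([0.2, -0.1]), np.array([0.5, 0.3]), fmin=0.0))
```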
  {
    "path": "transopt/optimizer/acquisition_function/piei.py",
    "content": "import copy\n\nfrom GPyOpt.acquisitions.base import AcquisitionBase\nfrom GPyOpt.core.task.cost import constant_cost_withGradients\nfrom GPyOpt.util.general import get_quantiles\nimport numpy as np\nfrom scipy.stats import norm\nfrom GPyOpt.acquisitions.LCB import AcquisitionLCB\n\nclass AcquisitionpiEI(AcquisitionBase):\n    \"\"\"\n    General template to create a new GPyOPt acquisition function\n\n    :param model: GPyOpt class of model\n    :param space: GPyOpt class of domain\n    :param optimizer: optimizer of the acquisition. Should be a GPyOpt optimizer\n    :param cost_withGradients: function that provides the evaluation cost and its gradients\n\n    \"\"\"\n    # --- Set this line to true if analytical gradients are available\n    analytical_gradient_prediction = False\n\n    def __init__(self, Model, space, optimizer, cost_withGradients=None, jitter=0.01, threshold=0.):\n        self.optimizer = optimizer\n        super(AcquisitionpiEI, self).__init__(Model, space, optimizer)\n        self.Model = Model\n        self.jitter = jitter\n        self.threshold = threshold\n        if cost_withGradients is None:\n            self.cost_withGradients = constant_cost_withGradients\n        else:\n            print('EIC acquisition does now make sense with cost at present. Cost set to constant.')\n            self.cost_withGradients = constant_cost_withGradients\n\n    def _compute_acq(self, x):\n\n        m, s = self.Model.predict(x)\n        # fmin = self.CBOModel.get_valid_fmin()\n        fmin = self.Model.get_fmin()\n        phi, Phi, u = get_quantiles(self.jitter, fmin, m, s)\n        f_acqu_ei = s * (u * Phi + phi)\n\n        return f_acqu_ei * self._compute_prior(x)\n\n    def _compute_prior(self, x):\n        return 1\n\n    def _compute_acq_withGradients(self, x):\n        # --- DEFINE YOUR AQUISITION (TO BE MAXIMIZED) AND ITS GRADIENT HERE HERE\n        #\n        # Compute here the value of the new acquisition function. Remember that x is a 2D  numpy array\n        # with a point in the domanin in each row. f_acqu_x should be a column vector containing the\n        # values of the acquisition at x. df_acqu_x contains is each row the values of the gradient of the\n        # acquisition at each point of x.\n        #\n        # NOTE: this function is optional. If note available the gradients will be approxiamted numerically.\n        raise NotImplementedError()\n"
  },
  {
    "path": "transopt/optimizer/acquisition_function/sequential.py",
    "content": "from GPyOpt.core.evaluators.base import EvaluatorBase\n\n\nclass Sequential(EvaluatorBase):\n    \"\"\"\n    Class for standard Sequential Bayesian optimization methods.\n\n    :param acquisition: acquisition function to be used to compute the batch.\n    :param batch size: it is 1 by default since this class is only used for sequential methods.\n    \"\"\"\n\n    def __init__(self, acquisition, batch_size=1):\n        super(Sequential, self).__init__(acquisition, batch_size)\n\n    def compute_batch(self, duplicate_manager=None,context_manager=None):\n        \"\"\"\n        Selects the new location to evaluate the objective.\n        \"\"\"\n        x, acq_value = self.acquisition.optimize(duplicate_manager=duplicate_manager)\n        return x, acq_value\n\n\n# class Sequential_Tabular(EvaluatorBase):\n#     \"\"\"\n#     Class for standard Sequential Bayesian optimization methods.\n#\n#     :param acquisition: acquisition function to be used to compute the batch.\n#     :param batch size: it is 1 by default since this class is only used for sequential methods.\n#     \"\"\"\n#\n#     def __init__(self, acquisition, batch_size=1):\n#         super(Sequential_Tabular, self).__init__(acquisition, batch_size)\n#\n#     def compute_batch(self, X, unobserved_indexes):\n#         \"\"\"\n#         Selects the new location to evaluate the objective.\n#         \"\"\"\n#         acq_value = self.acquisition._compute_acq(X)\n#         min_index = np.argmin(acq_value)\n#         return unobserved_indexes[min_index], acq_value[min_index]"
  },
  {
    "path": "transopt/optimizer/acquisition_function/smsego.py",
    "content": "import GPy\nimport numpy as np\nimport scipy.optimize as opt\nfrom scipy.stats import *\nfrom scipy.spatial import distance\nfrom GPyOpt.acquisitions.base import AcquisitionBase\n\nfrom agent.registry import acf_registry\nfrom transopt.utils.hypervolume import calc_hypervolume\n\n\n@acf_registry.register(\"SMSEGO\")\nclass SMSEGO:\n    def __init__(self, model, space, optimizer, config):\n        self.optimizer = optimizer\n        self.model = model\n        self.const = 1 / norm.cdf(0.5 + 1 / 2**self.model.num_objective)\n        self.current_hypervolume = None\n        self.w_ref = None\n\n    def _compute_acq(self, x):\n        if self.w_ref is None:\n            self.w_ref = self.model._Y.max(axis=1) + 1.0e2\n        if self.current_hypervolume is None:\n            self.current_hypervolume = calc_hypervolume(self.model._Y.T, self.w_ref)\n\n        if np.any(np.all(self.model._X == x, axis=1)):\n            return 1.0e5\n        else:\n            mean, var = self.model.predict(x)\n            lcb = mean - self.const * np.sqrt(var)\n            new_y_train = np.hstack((self.model._Y, lcb.T)).T\n\n            new_hypervolume = calc_hypervolume(new_y_train, self.w_ref)\n            smsego = self.current_hypervolume - new_hypervolume\n            # print(smsego)\n            return smsego\n\n    def optimize(self, duplicate_manager=None):\n        x_bounds = self.model._get_var_bound(\"search\")\n        default = np.array([(v[1] + v[0]) / 2 for _, v in x_bounds.items()])\n        bounds = [(v[0], v[1]) for _, v in x_bounds.items()]\n        result = opt.minimize(\n            self._compute_acq, x0=default, bounds=bounds, method=\"L-BFGS-B\"\n        )\n        return result.x[np.newaxis, :], result.fun\n"
  },
  {
    "path": "transopt/optimizer/acquisition_function/taf.py",
    "content": "import copy\n\nimport numpy as np\nfrom GPyOpt.core.task.cost import constant_cost_withGradients\nfrom GPyOpt.util.general import get_quantiles\n\nfrom transopt.agent.registry import acf_registry\nfrom transopt.optimizer.acquisition_function.acf_base import AcquisitionBase\n\n\n@acf_registry.register('TAF')\nclass AcquisitionTAF(AcquisitionBase):\n    \"\"\"\n    General template to create a new GPyOPt acquisition function\n\n    :param model: GPyOpt class of model\n    :param space: GPyOpt class of domain\n    :param optimizer: optimizer of the acquisition. Should be a GPyOpt optimizer\n    :param cost_withGradients: function that provides the evaluation cost and its gradients\n\n    \"\"\"\n    # --- Set this line to true if analytical gradients are available\n    analytical_gradient_prediction = False\n\n    def __init__(self, config):\n        super(AcquisitionTAF, self).__init__()\n        if 'jitter' in config:\n            self.jitter = config['jitter']\n        else:\n            self.jitter = 0.01\n\n        if 'threshold' in config:\n            self.threshold = config['threshold']\n        else:\n            self.threshold = 0\n\n        self.cost_withGradients = constant_cost_withGradients\n\n    def _compute_acq(self, x):\n        n_sample = len(x)\n        source_num = len(self.model._source_gps)\n        n_models = source_num + 1\n        acf_ei = np.empty((n_models, n_sample, 1))\n\n        for task_uid in range(source_num):\n            m, s = self.model._source_gps[task_uid].predict(x)\n            _X = self.model._source_gps[task_uid]._X\n            fmin = self.model._source_gps[task_uid].predict(_X)[0].min()\n            phi, Phi, u = get_quantiles(self.jitter, fmin, m, s)\n            acf_ei[task_uid] =  s * (u * Phi + phi)\n        m,s = self.model.predict(x)\n        for task_uid in range(source_num):\n            acf_ei[task_uid] = acf_ei[task_uid] * self.model._source_gp_weights[task_uid]\n        acf_ei[-1] = self.model._target_model_weight\n        fmin = self.model.get_fmin()\n        phi, Phi, u = get_quantiles(self.jitter, fmin, m, s)\n        acf_ei[-1] = acf_ei[-1] * (s * (u * Phi + phi))\n        f_acqu_ei = np.sum(acf_ei, axis=0)\n\n        return f_acqu_ei\n\n    def _compute_acq_withGradients(self, x):\n        # --- DEFINE YOUR AQUISITION (TO BE MAXIMIZED) AND ITS GRADIENT HERE HERE\n        #\n        # Compute here the value of the new acquisition function. Remember that x is a 2D  numpy array\n        # with a point in the domanin in each row. f_acqu_x should be a column vector containing the\n        # values of the acquisition at x. df_acqu_x contains is each row the values of the gradient of the\n        # acquisition at each point of x.\n        #\n        # NOTE: this function is optional. If note available the gradients will be approxiamted numerically.\n        raise NotImplementedError()\n\n"
  },
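The TAF score is a weighted combination of per-model EI terms, target model last. Spelled out in isolation (all names and values are illustrative, not project API):

```python
import numpy as np
from scipy.stats import norm

def ei(mu, sigma, fmin, jitter=0.01):
    u = (fmin - mu - jitter) / sigma
    return sigma * (u * norm.cdf(u) + norm.pdf(u))

def taf_score(preds, fmins, weights):
    # preds: list of (mu, sigma) per model, target model last;
    # fmins/weights: matching per-model incumbents and weights.
    return sum(w * ei(mu, sigma, fmin)
               for (mu, sigma), fmin, w in zip(preds, fmins, weights))

preds = [(np.array([0.2]), np.array([0.4])),   # source GP (made up)
         (np.array([0.1]), np.array([0.3]))]   # target GP (made up)
print(taf_score(preds, fmins=[0.0, 0.05], weights=[0.3, 0.7]))
```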
  {
    "path": "transopt/optimizer/construct_optimizer.py",
    "content": "\nfrom transopt.agent.registry import (acf_registry, sampler_registry,\n                                     selector_registry, space_refiner_registry,\n                                     model_registry, pretrain_registry, normalizer_registry)\nfrom transopt.optimizer.optimizer_base.bo import BO\n\n\n\ndef ConstructOptimizer(optimizer_config: dict = None, seed: int = 0) -> BO:\n    \n    # if 'SpaceRefinerParameters' not in optimizer_config:\n    #     optimizer_config['SpaceRefinerParameters'] = {}\n    # if 'SamplerParameters' not in optimizer_config:\n    #     optimizer_config['SamplerParameters'] = {}\n    # if 'ACFParameters' not in optimizer_config:\n    #     optimizer_config['ACFParameters'] = {}\n    # if 'ModelParameters' not in optimizer_config:\n    #     optimizer_config['ModelParameters'] = {}\n    # if 'PretrainParameters' not in optimizer_config:\n    #     optimizer_config['PretrainParameters'] = {}\n    # if 'NormalizerParameters' not in optimizer_config:\n    #     optimizer_config['NormalizerParameters'] = {}\n    # if 'SamplerInitNum' not in optimizer_config: \n    #     optimizer_config['SamplerInitNum'] = 11\n            \n    \"\"\"Create the optimizer object.\"\"\"\n    if optimizer_config['SpaceRefiner'] == 'None':\n        SpaceRefiner = None\n    else:\n        if 'SpaceRefinerParameters' not in optimizer_config:\n            optimizer_config['SpaceRefinerParameters'] = {}\n        SpaceRefiner = space_refiner_registry[optimizer_config['SpaceRefiner']](optimizer_config['SpaceRefinerParameters'])\n        \n    \n    Sampler = sampler_registry[optimizer_config['Sampler']](optimizer_config['SamplerInitNum'], optimizer_config['SamplerParameters'])\n    ACF = acf_registry[optimizer_config['ACF']](config = optimizer_config['ACFParameters'])\n\n    # Model = model_registry[optimizer_config['Model']](config = optimizer_config['ModelParameters'])\n    Model = model_registry[optimizer_config['Model']]()\n\n    if optimizer_config['Pretrain'] == 'None':\n        Pretrain = None\n    else:\n        Pretrain = pretrain_registry[optimizer_config['Pretrain']](optimizer_config['PretrainParameters'])\n        \n    \n    \n    if optimizer_config['Normalizer'] == 'None':\n        Normalizer = None\n    else:\n        Normalizer = normalizer_registry[optimizer_config['Normalizer']](optimizer_config['NormalizerParameters'])\n        \n    \n    optimizer = BO(SpaceRefiner, Sampler, ACF, Pretrain, Model, Normalizer, optimizer_config)\n    \n    \n    return optimizer\n\ndef ConstructSelector(optimizer_config, dict = None, seed: int = 0):\n    DataSelectors = {}\n    \n    \n    # if optimizer_config['SpaceRefinerDataSelector'] == 'None':\n    #     DataSelectors['SpaceRefinerDataSelector'] = None\n    # else:\n    #     DataSelectors['SpaceRefinerDataSelector'] = selector_registry(optimizer_config['SpaceRefinerDataSelector'], optimizer_config['SpaceRefinerDataSelectorParameters'])\n    \n    # if optimizer_config['SamplerDataSelector'] == 'None':\n    #     DataSelectors['SamplerDataSelector'] = None\n    # else:\n    #     DataSelectors['SamplerDataSelector'] = selector_registry(optimizer_config['SamplerDataSelector'], optimizer_config['SamplerDataSelectorParameters'])\n    \n    # if optimizer_config['ACFDataSelector'] == 'None':\n    #     DataSelectors['ACFDataSelector'] = None\n    # else:\n    #     DataSelectors['ACFDataSelector'] = selector_registry(optimizer_config['ACFDataSelector'], optimizer_config['ACFDataSelectorParameters'])\n    \n    # if 
optimizer_config['PretrainDataSelector'] == 'None':\n    #     DataSelectors['PretrainDataSelector'] = None\n    # else:\n    #     DataSelectors['PretrainDataSelector'] = selector_registry(optimizer_config['PretrainDataSelector'], optimizer_config['PretrainDataSelectorParameters'])\n    \n    # if optimizer_config['ModelDataSelector'] == 'None':\n    #     DataSelectors['ModelDataSelector'] = None\n    # else:\n    #     DataSelectors['ModelDataSelector'] = selector_registry(optimizer_config['ModelDataSelector'], optimizer_config['ModelDataSelectorParameters'])\n\n    # if optimizer_config['NormalizerDataSelector'] == 'None':\n    #     DataSelectors['NormalizerDataSelector'] = None\n    # else:\n    #     DataSelectors['NormalizerDataSelector'] = selector_registry(optimizer_config['NormalizerDataSelector'], optimizer_config['NormalizerDataSelectorParameters'])\n    \n    \n    for key in optimizer_config.keys():\n        if key.endswith('DataSelector'):\n            if optimizer_config[key] == 'None':\n                DataSelectors[key] = None\n            else:\n                DataSelectors[key] = selector_registry[optimizer_config[key]](optimizer_config[key + 'Parameters'])\n    return DataSelectors"
  },
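A minimal usage sketch for `ConstructOptimizer` and `ConstructSelector`. The component names below (`'random'`, `'EI'`, `'GP'`) are placeholders; valid names are whatever is actually registered in `transopt.agent.registry`:

```python
# Hypothetical config; component names depend on what the registries contain.
from transopt.optimizer.construct_optimizer import (ConstructOptimizer,
                                                    ConstructSelector)

optimizer_config = {
    'SpaceRefiner': 'None',       # the string 'None' disables a component
    'Sampler': 'random',
    'SamplerInitNum': 11,
    'ACF': 'EI',
    'Model': 'GP',
    'Pretrain': 'None',
    'Normalizer': 'None',
    'ModelDataSelector': 'None',  # every '*DataSelector' key is picked up
}

optimizer = ConstructOptimizer(optimizer_config, seed=0)
selectors = ConstructSelector(optimizer_config, seed=0)
```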
  {
    "path": "transopt/optimizer/model/HyperBO.py",
    "content": "import random\nimport time\n\nfrom external.hyperbo.basics import definitions as defs\nfrom external.hyperbo.basics import params_utils\nfrom external.hyperbo.gp_utils import gp\nfrom external.hyperbo.gp_utils import kernel\nfrom external.hyperbo.gp_utils import mean\nfrom external.hyperbo.gp_utils import utils\nfrom external.hyperbo.bo_utils import data\nfrom external.hyperbo.gp_utils import objectives as obj\nimport jax\nimport jax.numpy as jnp\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom typing import Any, Callable, Dict, List, Tuple, Union\n\nfont = {\n    'family': 'serif',\n    'weight': 'normal',\n    'size': 7,\n}\naxes = {'titlesize': 7, 'labelsize': 7}\nmatplotlib.rc('font', **font)\nmatplotlib.rc('axes', **axes)\n\nDEFAULT_WARP_FUNC = utils.DEFAULT_WARP_FUNC\nGPParams = defs.GPParams\nSubDataset = defs.SubDataset\n\nclass hyperbo():\n    def __init__(self, seed = 0):\n        self.mean_func = mean.constant\n        self.cov_func = kernel.squared_exponential\n        self.warp_func = DEFAULT_WARP_FUNC\n        self.key = jax.random.PRNGKey(seed)\n        self._X = None\n        self._Y = None\n\n        self.params = GPParams(\n            model={\n                'constant': 5.,\n                'lengthscale': 1.,\n                'signal_variance': 1.0,\n                'noise_variance': 0.01,\n            },\n            config={\n                'Method': 'adam',\n                'learning_rate': 1e-5,\n                'beta': 0.9,\n                'max_training_step': 1,\n                'batch_size': 100,\n                'retrain': 1,\n            })\n\n    def pretrain(self, Meta_data, Target_data):\n        dataset = {}\n        num_train_functions = len(Meta_data['X'])\n        for sub_dataset_id in range(num_train_functions):\n            x = jax.numpy.array(Meta_data['X'][sub_dataset_id])\n            y = jax.numpy.array(Meta_data['Y'][sub_dataset_id])\n            dataset[str(sub_dataset_id)] = SubDataset(x, y)\n\n        self.target_dataset_id = num_train_functions\n        self._X = Target_data['X']\n        self._Y = Target_data['Y']\n        x = jax.numpy.array(self._X)\n        y = jax.numpy.array(self._Y)\n        dataset[str(self.target_dataset_id)] = SubDataset(x, y)\n\n        self.model = gp.GP(\n            dataset=dataset,\n            params=self.params,\n            mean_func=self.mean_func,\n            cov_func=self.cov_func,\n            warp_func=self.warp_func,\n        )\n        assert self.key is not None, ('Cannot initialize with '\n                                             'init_random_key == None.')\n        key, subkey = jax.random.split(self.key)\n        self.model.initialize_params(subkey)\n        # Infer GP parameters.\n        key, subkey = jax.random.split(self.key)\n        self.model.train(subkey)\n\n    def retrain(self, Target_data):\n        self._X = Target_data['X']\n        self._Y = Target_data['Y']\n        x = jax.numpy.array(self._X)\n        y = jax.numpy.array(self._Y)\n        dataset =  SubDataset(x, y)\n\n        self.model.update_sub_dataset(\n            dataset, sub_dataset_key=str(self.target_dataset_id), is_append=False)\n\n        retrain_condition = 'retrain' in self.model.params.config and self.model.params.config[\n            'retrain'] > 0 and self.model.dataset[str(self.target_dataset_id)].x.shape[0] > 0\n        if not retrain_condition:\n            return\n        if self.model.params.config['objective'] in [obj.regkl, obj.regeuc]:\n            raise ValueError('Objective 
must include NLL to retrain.')\n        max_training_step = self.model.params.config['retrain']\n        self.model.params.config['max_training_step'] = max_training_step\n        key, subkey = jax.random.split(self.key)\n        self.model.train(subkey)\n\n    def predict(self, X, subset_data_id:Union[int, str] = 0):\n        _X = jnp.array(X)\n        mu, var = self.model.predict(_X, subset_data_id)\n\n        return mu, var\n"
  },
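A sketch of the wrapper's pretrain/retrain/predict cycle on random data, assuming the vendored `external.hyperbo` package is importable; the dictionary layout and shapes follow what the methods above expect:

```python
import numpy as np
from transopt.optimizer.model.HyperBO import hyperbo

model = hyperbo(seed=0)

# Two source tasks with 8 observations each, plus 5 target observations (2-D inputs).
meta = {'X': [np.random.rand(8, 2) for _ in range(2)],
        'Y': [np.random.rand(8, 1) for _ in range(2)]}
target = {'X': np.random.rand(5, 2), 'Y': np.random.rand(5, 1)}

model.pretrain(meta, target)   # joint GP training over all sub-datasets
model.retrain({'X': np.random.rand(6, 2), 'Y': np.random.rand(6, 1)})
mu, var = model.predict(np.random.rand(3, 2),
                        subset_data_id=str(model.target_dataset_id))
```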
  {
    "path": "transopt/optimizer/model/__init__.py",
    "content": "from transopt.optimizer.model.gp import GP\nfrom transopt.optimizer.model.pr import PR\nfrom transopt.optimizer.model.rf import RF\n\nfrom transopt.optimizer.model.mtgp import MTGP\nfrom transopt.optimizer.model.mhgp import MHGP\nfrom transopt.optimizer.model.rgpe import RGPE\nfrom transopt.optimizer.model.sgpt import SGPT\nfrom transopt.optimizer.model.rbfn import RBFN\n\nfrom transopt.optimizer.model.mlp import MLP\nfrom transopt.optimizer.model.deepkernel import DeepKernelGP\nfrom transopt.optimizer.model.neuralprocess import NeuralProcess\n"
  },
  {
    "path": "transopt/optimizer/model/bohb.py",
    "content": "import copy\n\nimport numpy as np\nimport scipy\nimport statsmodels.api as sm\nimport dask\n\n\nclass KDEMultivariate(sm.nonparametric.KDEMultivariate):\n    def __init__(self, configurations):\n        self.configurations = configurations\n        data = []\n        for config in configurations:\n            data.append(np.array(config.to_list()))\n        data = np.array(data)\n        super().__init__(data, configurations[0].kde_vartypes, 'normal_reference')\n\n\nclass Log():\n    def __init__(self, size):\n        self.size = size\n        self.logs = np.empty(self.size, dtype=dict)\n        self.best = {'loss': np.inf}\n\n    def __getitem__(self, index):\n        return self.logs[index]\n\n    def __setitem__(self, index, value):\n        self.logs[index] = value\n\n    def __repr__(self):\n        string = []\n        string.append(f's_max: {self.size}')\n        for s, log in enumerate(self.logs):\n            string.append(f's: {s}')\n            for budget in log:\n                string.append(f'Budget: {budget}')\n                string.append(f'Loss: {log[budget][\"loss\"]}')\n                string.append(str(log[budget]['hyperparameter']))\n        string.append('Best Hyperparameter Configuration:')\n        string.append(f'Budget: {self.best[\"budget\"]}')\n        string.append(f'Loss: {self.best[\"loss\"]}')\n        string.append(str(self.best['hyperparameter']))\n        return '\\n'.join(string)\n\n\nclass BOHB:\n    def __init__(self, configspace, evaluate, max_budget, min_budget,\n                 eta=3, best_percent=0.15, random_percent=1/3, n_samples=64,\n                 bw_factor=3, min_bandwidth=1e-3, n_proc=1):\n        self.eta = eta\n        self.configspace = configspace\n        self.max_budget = max_budget\n        self.min_budget = min_budget\n        self.evaluate = evaluate\n\n        self.best_percent = best_percent\n        self.random_percent = random_percent\n        self.n_samples = n_samples\n        self.min_bandwidth = min_bandwidth\n        self.bw_factor = bw_factor\n        self.n_proc = n_proc\n\n        self.s_max = int(np.log(self.max_budget/self.min_budget) / np.log(self.eta))\n        self.budget = (self.s_max + 1) * self.max_budget\n\n        self.kde_good = None\n        self.kde_bad = None\n        self.samples = np.array([])\n\n    def optimize(self):\n        logs = Log(self.s_max+1)\n        for s in reversed(range(self.s_max + 1)):\n            logs[s] = {}\n            n = int(np.ceil(\n                (self.budget * (self.eta ** s)) / (self.max_budget * (s + 1))))\n            r = self.max_budget * (self.eta ** -s)\n            self.kde_good = None\n            self.kde_bad = None\n            self.samples = np.array([])\n            for i in range(s+1):\n                n_i = n * self.eta ** (-i)  # Number of configs\n                r_i = r * self.eta ** (i)  # Budget\n                logs[s][r_i] = {'loss': np.inf}\n\n                samples = []\n                losses = []\n                for j in range(n):\n                    sample = self.get_sample()\n                    if self.n_proc > 1:\n                        loss = dask.delayed(self.evaluate)(sample.to_dict(), int(r_i))\n                    else:\n                        loss = self.evaluate(sample.to_dict(), int(r_i))\n                    samples.append(sample)\n                    losses.append(loss)\n                if self.n_proc > 1:\n                    losses = dask.compute(\n                        *losses, scheduler='processes', 
num_workers=self.n_proc)\n                midx = np.argmin(losses)\n                logs[s][r_i]['loss'] = losses[midx]\n                logs[s][r_i]['hyperparameter'] = samples[midx]\n\n                if logs[s][r_i]['loss'] < logs.best['loss']:\n                    logs.best['loss'] = logs[s][r_i]['loss']\n                    logs.best['budget'] = r_i\n                    logs.best['hyperparameter'] = logs[s][r_i]['hyperparameter']\n\n                n = int(np.ceil(n_i/self.eta))\n                idxs = np.argsort(losses)\n                self.samples = np.array(samples)[idxs[:n]]\n                n_good = int(np.ceil(self.best_percent * len(samples)))\n                if n_good > len(samples[0].kde_vartypes) + 2:\n                    good_data = np.array(samples)[idxs[:n_good]]\n                    bad_data = np.array(samples)[idxs[n_good:]]\n                    self.kde_good = KDEMultivariate(good_data)\n                    self.kde_bad = KDEMultivariate(bad_data)\n                    self.kde_bad.bw = np.clip(\n                        self.kde_bad.bw, self.min_bandwidth, None)\n                    self.kde_good.bw = np.clip(\n                        self.kde_good.bw, self.min_bandwidth, None)\n        return logs\n\n    def get_sample(self):\n        if self.kde_good is None or np.random.random() < self.random_percent:\n            if len(self.samples):\n                idx = np.random.randint(0, len(self.samples))\n                sample = self.samples[idx]\n                self.samples = np.delete(self.samples, idx)\n                return sample\n            else:\n                return self.configspace.sample_configuration()\n\n        # Sample from the good data\n        best_tpe_val = np.inf\n        for _ in range(self.n_samples):\n            idx = np.random.randint(0, len(self.kde_good.configurations))\n            configuration = copy.deepcopy(self.kde_good.configurations[idx])\n            for hyperparameter, bw in zip(configuration, self.kde_good.bw):\n                if hyperparameter.type == cs.Type.Continuous:\n                    value = hyperparameter.value\n                    bw = bw * self.bw_factor\n                    hyperparameter.value = scipy.stats.truncnorm.rvs(\n                        -value/bw, (1-value)/bw, loc=value, scale=bw)\n                elif hyperparameter.type == cs.Type.Discrete:\n                    if np.random.rand() >= (1-bw):\n                        idx = np.random.randint(len(hyperparameter.choices))\n                        hyperparameter.value = idx\n                else:\n                    raise NotImplementedError\n\n            tpe_val = (self.kde_bad.pdf(configuration.to_list()) /\n                       self.kde_good.pdf(configuration.to_list()))\n            if tpe_val < best_tpe_val:\n                best_tpe_val = tpe_val\n                best_configuration = configuration\n\n        return best_configuration"
  },
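The bracket schedule in `optimize()` is standard Hyperband arithmetic: `s_max = log_eta(max_budget / min_budget)` brackets, each starting with `n` configurations at budget `r` before successive halving. A standalone sketch of that schedule for `eta=3`, `min_budget=1`, `max_budget=27`:

```python
import numpy as np

eta, min_budget, max_budget = 3, 1, 27
s_max = int(np.log(max_budget / min_budget) / np.log(eta))  # 3
budget = (s_max + 1) * max_budget

for s in reversed(range(s_max + 1)):
    n = int(np.ceil((budget * eta ** s) / (max_budget * (s + 1))))
    r = max_budget * eta ** -s
    print(f'bracket s={s}: n={n} configs, initial budget r={r:g}')
# bracket s=3: n=27 configs, initial budget r=1
# bracket s=2: n=12 configs, initial budget r=3
# bracket s=1: n=6 configs, initial budget r=9
# bracket s=0: n=4 configs, initial budget r=27
```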
  {
    "path": "transopt/optimizer/model/deepkernel.py",
    "content": "\n\"\"\"\nThis FSBO implementation is based on the original implementation from Hadi Samer Jomaa\nfor his work on \"Transfer Learning for Bayesian HPOBench with End-to-End Landmark Meta-Features\"\nat the NeurIPS 2021 MetaLearning Workshop \n\nThe implementation for Deep Kernel Learning is based on the original Gpytorch example: \nhttps://docs.gpytorch.ai/en/stable/examples/06_PyTorch_NN_Integration_DKL/KISSGP_Deep_Kernel_Regression_CUDA.html\n\n\"\"\"\n\nimport copy\nimport logging\nimport os\n\nimport gpytorch\nimport numpy as np\nimport torch\nimport torch.nn as nn\nfrom scipy.optimize import differential_evolution\nfrom transopt.agent.registry import model_registry\n\nnp.random.seed(1203)\nRandomQueryGenerator= np.random.RandomState(413)\nRandomSupportGenerator= np.random.RandomState(413)\nRandomTaskGenerator = np.random.RandomState(413)\n\n\n\nclass Metric(object):\n    def __init__(self,prefix='train: '):\n        self.reset()\n        self.message=prefix + \"loss: {loss:.2f} - noise: {log_var:.2f} - mse: {mse:.2f}\"\n        \n    def update(self,loss,noise,mse):\n        self.loss.append(np.asscalar(loss))\n        self.noise.append(np.asscalar(noise))\n        self.mse.append(np.asscalar(mse))\n    \n    def reset(self,):\n        self.loss = []\n        self.noise = []\n        self.mse = []\n    \n    def report(self):\n        return self.message.format(loss=np.mean(self.loss),\n                            log_var=np.mean(self.noise),\n                            mse=np.mean(self.mse))\n    \n    def get(self):\n        return {\"loss\":np.mean(self.loss),\n                \"noise\":np.mean(self.noise),\n                \"mse\":np.mean(self.mse)}\n    \n\ndef totorch(x,device):\n\n    return torch.Tensor(x).to(device)    \n\nclass MLP(nn.Module):\n    def __init__(self, input_size, hidden_size=[32,32,32,32], dropout=0.0):\n        \n        super(MLP, self).__init__()\n        self.nonlinearity = nn.ReLU()\n        self.fc = nn.ModuleList([nn.Linear(in_features=input_size, out_features=hidden_size[0])])\n        for d_out in hidden_size[1:]:\n            self.fc.append(nn.Linear(in_features=self.fc[-1].out_features, out_features=d_out))\n        self.out_features = hidden_size[-1]\n        self.dropout = nn.Dropout(dropout)\n    def forward(self,x):\n        \n        for fc in self.fc[:-1]:\n            x = fc(x)\n            x = self.dropout(x)\n            x = self.nonlinearity(x)\n        x = self.fc[-1](x)\n        x = self.dropout(x)\n        return x\n\nclass ExactGPLayer(gpytorch.models.ExactGP):\n    def __init__(self, train_x, train_y, likelihood,config,dims ):\n        super(ExactGPLayer, self).__init__(train_x, train_y, likelihood)\n        self.mean_module  = gpytorch.means.ConstantMean()\n\n        if(config[\"kernel\"]=='rbf' or config[\"kernel\"]=='RBF'):\n            self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel(ard_num_dims=dims if config[\"ard\"] else None))\n        elif(config[\"kernel\"]=='matern'):\n            self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.MaternKernel(nu=config[\"nu\"],ard_num_dims=dims if config[\"ard\"] else None))\n        else:\n            raise ValueError(\"[ERROR] the kernel '\" + str(config[\"kernel\"]) + \"' is not supported for regression, use 'rbf' or 'spectral'.\")\n            \n    def forward(self, x):\n        mean_x  = self.mean_module(x)\n        covar_x = self.covar_module(x)\n        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)\n    
\n    \n@model_registry.register(\"DeepKernelGP\")\nclass DeepKernelGP(nn.Module):\n    def __init__(self, config = {}):\n        super(DeepKernelGP, self).__init__()\n\n        if len(config) == 0:\n            self.config = {\"kernel\": \"matern\", 'ard': False, \"nu\": 2.5, 'hidden_size': [32,32,32,32], 'n_inner_steps': 1,\n                           'test_batch_size':1, 'batch_size':1, 'seed':0, 'eval_batch_size':1000, 'verbose':True, 'loss_tol':0.0001,\n                           'max_patience':16, 'lr':0.001, 'epochs':100, 'load_model': False, 'checkpoint_path': './external/model/FSBO/Seed_0_1'}\n        else:\n            self.config = config\n        torch.manual_seed(self.config['seed'])\n        \n        self.device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n        self.hidden_size = self.config['hidden_size']\n        self.kernel_config = {\"kernel\": self.config['kernel'], \"ard\": self.config['ard'], \"nu\": self.config['nu']}\n        self.max_patience = self.config['max_patience']\n        self.lr  = self.config['lr']\n        self.load_model = self.config['load_model']\n        self.checkpoint = self.config['checkpoint_path']\n        \n        self.epochs = self.config['epochs']\n        self.verbose = self.config['verbose']\n        self.loss_tol = self.config['loss_tol']\n        self.eval_batch_size = self.config['eval_batch_size']\n        self.has_model = False\n\n\n    def get_model_likelihood_mll(self, train_size):\n        \n        train_x=torch.ones(train_size, self.feature_extractor.out_features).to(self.device)\n        train_y=torch.ones(train_size).to(self.device)\n\n        likelihood = gpytorch.likelihoods.GaussianLikelihood()\n        model = ExactGPLayer(train_x=train_x, train_y=train_y, likelihood=likelihood, config=self.kernel_config,dims = self.feature_extractor.out_features)\n        self.model = model.to(self.device)\n        self.likelihood = likelihood.to(self.device)\n        self.mll        = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model).to(self.device)\n\n\n\n    def fit(self,\n            X: np.ndarray,\n            Y: np.ndarray,\n            optimize: bool = False,):\n\n        self.X_obs, self.y_obs = totorch(X, self.device), totorch(Y, self.device).reshape(-1)\n        \n        if self.load_model:\n            assert(self.checkpoint is not None)\n            print(\"Model_loaded\")\n            self.load_checkpoint(os.path.join(self.checkpoint,\"weights\"))\n\n        if self.has_model == False:\n            self.input_size = X.shape[1]\n            self.feature_extractor = MLP(self.input_size, hidden_size = self.hidden_size).to(self.device)\n            self.get_model_likelihood_mll(1)\n            self.has_model = True\n        \n        losses = [np.inf]\n        best_loss = np.inf\n        weights = copy.deepcopy(self.state_dict())\n        patience=0\n        optimizer = torch.optim.Adam([{'params': self.model.parameters(), 'lr': self.lr},\n                                {'params': self.feature_extractor.parameters(), 'lr': self.lr}])\n                    \n        for _ in range(self.epochs):\n            optimizer.zero_grad()\n            z = self.feature_extractor(self.X_obs)\n            self.model.set_train_data(inputs=z, targets=self.y_obs, strict=False)\n            predictions = self.model(z)\n            try:\n                loss = -self.mll(predictions, self.model.train_targets)\n                loss.backward()\n                optimizer.step()\n            except Exception as e:\n  
              raise e\n            \n            if self.verbose:\n                print(\"Iter {iter}/{epochs} - Loss: {loss:.5f}   noise: {noise:.5f}\".format(\n                    iter=_+1,epochs=self.epochs,loss=loss.item(),noise=self.likelihood.noise.item()))                \n            losses.append(loss.detach().to(\"cpu\").item())\n            if best_loss>losses[-1]:\n                best_loss = losses[-1]\n                weights = copy.deepcopy(self.state_dict())\n            if np.allclose(losses[-1],losses[-2],atol=self.loss_tol):\n                patience+=1\n            else:\n                patience=0\n            if patience>self.max_patience:\n                break\n        self.load_state_dict(weights)\n        return losses\n    \n    def load_checkpoint(self, checkpoint):\n        ckpt = torch.load(checkpoint,map_location=torch.device(self.device))\n        self.model.load_state_dict(ckpt['gp'],strict=False)\n        self.likelihood.load_state_dict(ckpt['likelihood'],strict=False)\n        self.feature_extractor.load_state_dict(ckpt['net'],strict=False)\n        \n\n    def predict(self, X_pen):\n        \n        query_X = totorch(X_pen, self.device)\n        self.model.eval()\n        self.feature_extractor.eval()\n        self.likelihood.eval()        \n\n        z_support = self.feature_extractor(self.X_obs).detach()\n        self.model.set_train_data(inputs=z_support, targets=self.y_obs, strict=False)\n\n        with torch.no_grad():\n            z_query = self.feature_extractor(query_X).detach()\n            pred    = self.likelihood(self.model(z_query))\n\n            \n        mu    = pred.mean.detach().to(\"cpu\").numpy()[: ,np.newaxis]\n        stddev = pred.stddev.detach().to(\"cpu\").numpy()[: ,np.newaxis]\n        \n        return mu,stddev\n\n    def continuous_maximization( self, dim, bounds, acqf):\n\n        result = differential_evolution(acqf, bounds=bounds, updating='immediate',workers=1, maxiter=20000, init=\"sobol\")\n        return result.x.reshape(-1,dim)\n\n\n    def get_fmin(self):\n        return np.min(self.y_obs.detach().to(\"cpu\").numpy())\n"
  },
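A fit/predict sketch on random data, assuming `torch` and `gpytorch` are installed. When a non-empty config is passed it must contain every key the constructor reads, so the defaults are spelled out here with a shorter training run:

```python
import numpy as np
from transopt.optimizer.model.deepkernel import DeepKernelGP

X, Y = np.random.rand(20, 3), np.random.rand(20, 1)

model = DeepKernelGP(config={
    'kernel': 'matern', 'ard': False, 'nu': 2.5,
    'hidden_size': [32, 32, 32, 32], 'seed': 0,
    'verbose': False, 'loss_tol': 1e-4, 'max_patience': 16,
    'lr': 1e-3, 'epochs': 10, 'eval_batch_size': 1000,
    'load_model': False, 'checkpoint_path': None,
})
losses = model.fit(X, Y)                       # returns the per-epoch loss trace
mu, std = model.predict(np.random.rand(5, 3))  # each of shape (5, 1)
```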
  {
    "path": "transopt/optimizer/model/dyhpo.py",
    "content": "import logging\nimport os\nfrom copy import deepcopy\nfrom typing import Dict, Tuple\n\nimport gpytorch\nimport numpy as np\nimport torch\nimport torch.nn as nn\nfrom torch import cat\n\nfrom transopt.agent.registry import model_registry\n\n\nclass FeatureExtractor(nn.Module):\n    \"\"\"\n    The feature extractor that is part of the deep kernel.\n    \"\"\"\n    def __init__(self, configuration):\n        super(FeatureExtractor, self).__init__()\n\n        self.configuration = configuration\n\n        self.nr_layers = configuration['nr_layers']\n        self.act_func = nn.LeakyReLU()\n        # adding one to the dimensionality of the initial input features\n        # for the concatenation with the budget.\n        initial_features = configuration['nr_initial_features'] + 1\n        self.fc1 = nn.Linear(initial_features, configuration['layer1_units'])\n        self.bn1 = nn.BatchNorm1d(configuration['layer1_units'])\n        for i in range(2, self.nr_layers):\n            setattr(\n                self,\n                f'fc{i + 1}',\n                nn.Linear(configuration[f'layer{i - 1}_units'], configuration[f'layer{i}_units']),\n            )\n            setattr(\n                self,\n                f'bn{i + 1}',\n                nn.BatchNorm1d(configuration[f'layer{i}_units']),\n            )\n\n\n        setattr(\n            self,\n            f'fc{self.nr_layers}',\n            nn.Linear(\n                configuration[f'layer{self.nr_layers - 1}_units'] +\n                configuration['cnn_nr_channels'],  # accounting for the learning curve features\n                configuration[f'layer{self.nr_layers}_units']\n            ),\n        )\n        self.cnn = nn.Sequential(\n            nn.Conv1d(in_channels=1, kernel_size=(configuration['cnn_kernel_size'],), out_channels=4),\n            nn.AdaptiveMaxPool1d(1),\n        )\n\n    def forward(self, x, budgets, learning_curves):\n\n        # add an extra dimensionality for the budget\n        # making it nr_rows x 1.\n        budgets = torch.unsqueeze(budgets, dim=1)\n        # concatenate budgets with examples\n        x = cat((x, budgets), dim=1)\n        x = self.fc1(x)\n        x = self.act_func(self.bn1(x))\n\n        for i in range(2, self.nr_layers):\n            x = self.act_func(\n                getattr(self, f'bn{i}')(\n                    getattr(self, f'fc{i}')(\n                        x\n                    )\n                )\n            )\n\n        # add an extra dimensionality for the learning curve\n        # making it nr_rows x 1 x lc_values.\n        learning_curves = torch.unsqueeze(learning_curves, 1)\n        lc_features = self.cnn(learning_curves)\n        # revert the output from the cnn into nr_rows x nr_kernels.\n        lc_features = torch.squeeze(lc_features, 2)\n\n        # put learning curve features into the last layer along with the higher level features.\n        x = cat((x, lc_features), dim=1)\n        x = self.act_func(getattr(self, f'fc{self.nr_layers}')(x))\n\n        return x\n\n\nclass GPRegressionModel(gpytorch.models.ExactGP):\n    \"\"\"\n    A simple GP model.\n    \"\"\"\n    def __init__(\n        self,\n        train_x: torch.Tensor,\n        train_y: torch.Tensor,\n        likelihood: gpytorch.likelihoods.GaussianLikelihood,\n    ):\n        \"\"\"\n        Constructor of the GPRegressionModel.\n\n        Args:\n            train_x: The initial train examples for the GP.\n            train_y: The initial train labels for the GP.\n            likelihood: The 
likelihood to be used.\n        \"\"\"\n        super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)\n\n        self.mean_module = gpytorch.means.ConstantMean()\n        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())\n\n    def forward(self, x):\n\n        mean_x = self.mean_module(x)\n        covar_x = self.covar_module(x)\n\n        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)\n\n\nclass DyHPO:\n    \"\"\"\n    The DyHPO DeepGP model.\n    \"\"\"\n    def __init__(\n        self,\n        configuration: Dict,\n        device: torch.device,\n        dataset_name: str = 'unknown',\n        output_path: str = '.',\n        seed: int = 11,\n    ):\n        \"\"\"\n        The constructor for the DyHPO model.\n\n        Args:\n            configuration: The configuration to be used\n                for the different parts of the surrogate.\n            device: The device where the experiments will be run on.\n            dataset_name: The name of the dataset for the current run.\n            output_path: The path where the intermediate/final results\n                will be stored.\n            seed: The seed that will be used to store the checkpoint\n                properly.\n        \"\"\"\n        super(DyHPO, self).__init__()\n        self.feature_extractor = FeatureExtractor(configuration)\n        self.batch_size = configuration['batch_size']\n        self.nr_epochs = configuration['nr_epochs']\n        self.early_stopping_patience = configuration['nr_patience_epochs']\n        self.refine_epochs = 50\n        self.dev = device\n        self.seed = seed\n        self.model, self.likelihood, self.mll = \\\n            self.get_model_likelihood_mll(\n                configuration[f'layer{self.feature_extractor.nr_layers}_units']\n            )\n\n        self.model.to(self.dev)\n        self.likelihood.to(self.dev)\n        self.feature_extractor.to(self.dev)\n\n        self.optimizer = torch.optim.Adam([\n            {'params': self.model.parameters(), 'lr': configuration['learning_rate']},\n            {'params': self.feature_extractor.parameters(), 'lr': configuration['learning_rate']}],\n        )\n\n        self.configuration = configuration\n        # the number of initial points for which we will retrain fully from scratch\n        # This is basically equal to the dimensionality of the search space + 1.\n        self.initial_nr_points = 10\n        # keeping track of the total hpo iterations. 
It will be used during the optimization\n        # process to switch from fully training the model, to refining.\n        self.iterations = 0\n        # flag for when the optimization of the model should start from scratch.\n        self.restart = True\n\n        self.logger = logging.getLogger(__name__)\n\n        self.checkpoint_path = os.path.join(\n            output_path,\n            'checkpoints',\n            f'{dataset_name}',\n            f'{self.seed}',\n        )\n\n        os.makedirs(self.checkpoint_path, exist_ok=True)\n\n        self.checkpoint_file = os.path.join(\n            self.checkpoint_path,\n            'checkpoint.pth'\n        )\n\n    def restart_optimization(self):\n        \"\"\"\n        Restart the surrogate model from scratch.\n        \"\"\"\n        self.feature_extractor = FeatureExtractor(self.configuration).to(self.dev)\n        self.model, self.likelihood, self.mll = \\\n            self.get_model_likelihood_mll(\n                self.configuration[f'layer{self.feature_extractor.nr_layers}_units'],\n            )\n\n        self.optimizer = torch.optim.Adam([\n            {'params': self.model.parameters(), 'lr': self.configuration['learning_rate']},\n            {'params': self.feature_extractor.parameters(), 'lr': self.configuration['learning_rate']}],\n        )\n\n    def get_model_likelihood_mll(\n        self,\n        train_size: int,\n    ) -> Tuple[GPRegressionModel, gpytorch.likelihoods.GaussianLikelihood, gpytorch.mlls.ExactMarginalLogLikelihood]:\n        \"\"\"\n        Called when the surrogate is first initialized or restarted.\n\n        Args:\n            train_size: The size of the current training set.\n\n        Returns:\n            model, likelihood, mll - The GP model, the likelihood and\n                the marginal likelihood.\n        \"\"\"\n        train_x = torch.ones(train_size, train_size).to(self.dev)\n        train_y = torch.ones(train_size).to(self.dev)\n\n        likelihood = gpytorch.likelihoods.GaussianLikelihood().to(self.dev)\n        model = GPRegressionModel(train_x=train_x, train_y=train_y, likelihood=likelihood).to(self.dev)\n        mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model).to(self.dev)\n\n        return model, likelihood, mll\n\n    def train_pipeline(self, data: Dict[str, torch.Tensor], load_checkpoint: bool = False):\n        \"\"\"\n        Train the surrogate model.\n\n        Args:\n            data: A dictionary which has the training examples, training features,\n                training budgets and in the end the training curves.\n            load_checkpoint: A flag whether to load the state from a previous checkpoint,\n                or whether to start from scratch.\n        \"\"\"\n        self.iterations += 1\n        self.logger.debug(f'Starting iteration: {self.iterations}')\n        # whether the state has been changed. 
Basically, if a better loss was found during\n        # this optimization iteration then the state (weights) were changed.\n        weights_changed = False\n\n        if load_checkpoint:\n            try:\n                self.load_checkpoint()\n            except FileNotFoundError:\n                self.logger.error(f'No checkpoint file found at: {self.checkpoint_file}'\n                                  f'Training the GP from the beginning')\n\n        self.model.train()\n        self.likelihood.train()\n        self.feature_extractor.train()\n\n        self.optimizer = torch.optim.Adam([\n            {'params': self.model.parameters(), 'lr': self.configuration['learning_rate']},\n            {'params': self.feature_extractor.parameters(), 'lr': self.configuration['learning_rate']}],\n        )\n\n        X_train = data['X_train']\n        train_budgets = data['train_budgets']\n        train_curves = data['train_curves']\n        y_train = data['y_train']\n\n        initial_state = self.get_state()\n        training_errored = False\n\n        if self.restart:\n            self.restart_optimization()\n            nr_epochs = self.nr_epochs\n            # 2 cases where the statement below is hit.\n            # - We are switching from the full training phase in the beginning to refining.\n            # - We are restarting because our refining diverged\n            if self.initial_nr_points <= self.iterations:\n                self.restart = False\n        else:\n            nr_epochs = self.refine_epochs\n\n        # where the mean squared error will be stored\n        # when predicting on the train set\n        mse = 0.0\n\n        for epoch_nr in range(0, nr_epochs):\n\n            nr_examples_batch = X_train.size(dim=0)\n            # if only one example in the batch, skip the batch.\n            # Otherwise, the code will fail because of batchnorm\n            if nr_examples_batch == 1:\n                continue\n\n            # Zero backprop gradients\n            self.optimizer.zero_grad()\n\n            projected_x = self.feature_extractor(X_train, train_budgets, train_curves)\n            self.model.set_train_data(projected_x, y_train, strict=False)\n            output = self.model(projected_x)\n\n            try:\n                # Calc loss and backprop derivatives\n                loss = -self.mll(output, self.model.train_targets)\n                loss_value = loss.detach().to('cpu').item()\n                mse = gpytorch.metrics.mean_squared_error(output, self.model.train_targets)\n                self.logger.debug(\n                    f'Epoch {epoch_nr} - MSE {mse:.5f}, '\n                    f'Loss: {loss_value:.3f}, '\n                    f'lengthscale: {self.model.covar_module.base_kernel.lengthscale.item():.3f}, '\n                    f'noise: {self.model.likelihood.noise.item():.3f}, '\n                )\n                loss.backward()\n                self.optimizer.step()\n            except Exception as training_error:\n                self.logger.error(f'The following error happened while training: {training_error}')\n                # An error has happened, trigger the restart of the optimization and restart\n                # the model with default hyperparameters.\n                self.restart = True\n                training_errored = True\n                break\n\n        \"\"\"\n        # metric too high, time to restart, or we risk divergence\n        if mse > 0.15:\n            if not self.restart:\n                self.restart = True\n        \"\"\"\n       
 if training_errored:\n            self.save_checkpoint(initial_state)\n            self.load_checkpoint()\n\n    def predict_pipeline(\n        self,\n        train_data: Dict[str, torch.Tensor],\n        test_data: Dict[str, torch.Tensor],\n    ) -> Tuple[np.ndarray, np.ndarray]:\n        \"\"\"\n\n        Args:\n            train_data: A dictionary that has the training\n                examples, features, budgets and learning curves.\n            test_data: Same as for the training data, but it is\n                for the testing part and it does not feature labels.\n\n        Returns:\n            means, stds: The means of the predictions for the\n                testing points and the standard deviations.\n        \"\"\"\n        self.model.eval()\n        self.feature_extractor.eval()\n        self.likelihood.eval()\n\n        with torch.no_grad(): # gpytorch.settings.fast_pred_var():\n            projected_train_x = self.feature_extractor(\n                train_data['X_train'],\n                train_data['train_budgets'],\n                train_data['train_curves'],\n            )\n            self.model.set_train_data(inputs=projected_train_x, targets=train_data['y_train'], strict=False)\n            projected_test_x = self.feature_extractor(\n                test_data['X_test'],\n                test_data['test_budgets'],\n                test_data['test_curves'],\n            )\n            preds = self.likelihood(self.model(projected_test_x))\n\n        means = preds.mean.detach().to('cpu').numpy().reshape(-1, )\n        stds = preds.stddev.detach().to('cpu').numpy().reshape(-1, )\n\n        return means, stds\n\n    def load_checkpoint(self):\n        \"\"\"\n        Load the state from a previous checkpoint.\n        \"\"\"\n        checkpoint = torch.load(self.checkpoint_file)\n        self.model.load_state_dict(checkpoint['gp_state_dict'])\n        self.feature_extractor.load_state_dict(checkpoint['feature_extractor_state_dict'])\n        self.likelihood.load_state_dict(checkpoint['likelihood_state_dict'])\n\n    def save_checkpoint(self, state: Dict =None):\n        \"\"\"\n        Save the given state or the current state in a\n        checkpoint file.\n\n        Args:\n            state: The state to save, if none, it will\n            save the current state.\n        \"\"\"\n\n        if state is None:\n            torch.save(\n                self.get_state(),\n                self.checkpoint_file,\n            )\n        else:\n            torch.save(\n                state,\n                self.checkpoint_file,\n            )\n\n    def get_state(self) -> Dict[str, Dict]:\n        \"\"\"\n        Get the current state of the surrogate.\n\n        Returns:\n            current_state: A dictionary that represents\n                the current state of the surrogate model.\n        \"\"\"\n        current_state = {\n            'gp_state_dict': deepcopy(self.model.state_dict()),\n            'feature_extractor_state_dict': deepcopy(self.feature_extractor.state_dict()),\n            'likelihood_state_dict': deepcopy(self.likelihood.state_dict()),\n        }\n\n        return current_state"
  },
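`train_pipeline` and `predict_pipeline` consume plain dictionaries of tensors. The shapes and configuration keys below are inferred from `FeatureExtractor.forward` (budgets are 1-D, learning curves are `nr_rows x lc_length`); this is an illustrative sketch, not a canonical configuration:

```python
import torch
from transopt.optimizer.model.dyhpo import DyHPO

config = {
    'nr_layers': 2, 'nr_initial_features': 3,
    'layer1_units': 64, 'layer2_units': 128,
    'cnn_nr_channels': 4, 'cnn_kernel_size': 3,
    'batch_size': 64, 'nr_epochs': 100, 'nr_patience_epochs': 10,
    'learning_rate': 1e-3,
}
surrogate = DyHPO(config, device=torch.device('cpu'))

train_data = {
    'X_train': torch.rand(16, 3),        # 16 configs, 3 hyperparameters
    'train_budgets': torch.rand(16),     # one budget per config
    'train_curves': torch.rand(16, 10),  # partial learning curves (10 steps)
    'y_train': torch.rand(16),
}
surrogate.train_pipeline(train_data)

test_data = {
    'X_test': torch.rand(4, 3),
    'test_budgets': torch.rand(4),
    'test_curves': torch.rand(4, 10),
}
means, stds = surrogate.predict_pipeline(train_data, test_data)
```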
  {
    "path": "transopt/optimizer/model/get_model.py",
    "content": "from transopt.agent.registry import model_registry\n\n\n\ndef get_model(model_name, **kwargs):\n    \"\"\"Create the optimizer object.\"\"\"\n    model_class = model_registry.get(model_name)\n    config = kwargs\n\n    if model_class is not None:\n        model = model_class(config=config)\n    else:\n        print(f\"Refiner '{model_name}' not found in the registry.\")\n        raise NameError\n    return model"
  },
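Usage is a plain registry lookup; with no keyword arguments the named class receives an empty config and falls back to its own defaults, e.g. the `DeepKernelGP` registered above:

```python
from transopt.optimizer.model.get_model import get_model

surrogate = get_model('DeepKernelGP')  # empty config -> built-in defaults
```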
  {
    "path": "transopt/optimizer/model/gp.py",
    "content": "\nimport copy\nimport numpy as np\nfrom typing import Tuple, List\nfrom sklearn.preprocessing import StandardScaler\n\nfrom GPy.models import GPRegression\nfrom GPy.kern import RBF, Kern, Matern32\n\nfrom transopt.optimizer.model.model_base import  Model\nfrom transopt.optimizer.model.utils import is_pd, nearest_pd\nfrom transopt.agent.registry import model_registry\n\n@model_registry.register('GP')\nclass GP(Model):\n\n    def __init__(\n        self,\n        kernel: Kern = None,\n        noise_variance: float = 1.0,\n        normalize = False,\n        **options: dict\n    ):\n        \"\"\"Initialize the Method.\n\n        Args:\n            kernel: The type of kernel of the GP. Defaults to squared exponential\n                without automatic relevance determination.\n            noise_variance: The variance of the observation noise.\n            normalize: Train the model on normalized (`=True`) or original (`=False`)\n                data.\n            **options: Training arguments for `GPy.models.GPRegression`.\n        \"\"\"\n        super().__init__()\n        self._kernel = kernel if kernel is not None else None\n\n        self._noise_variance = np.array(noise_variance)\n        self._gpy_model = None\n\n\n        self._options = options\n\n    @property\n    def kernel(self):\n        \"\"\"Return GPy kernel in the normalized space.\"\"\"\n        return self._kernel\n\n    @property\n    def noise_variance(self):\n        \"\"\"Return noise variance.\"\"\"\n        return self._noise_variance\n\n    @kernel.setter\n    def kernel(self, kernel: Kern):\n        \"\"\"Assign a new kernel to the GP.\n\n        Args:\n            kernel: the new kernel to be assigned.\n\n        \"\"\"\n        self._kernel = kernel.copy()\n        if self._gpy_model:\n            # remove the old kernel from being a parameter of `gpy_model`\n            self._gpy_model.unlink_parameter(self._gpy_model.kern)\n            del self._gpy_model.kern\n            self._gpy_model.kern = kernel  # assign new kernel\n            # add the new kernel to the param class\n            self._gpy_model.link_parameter(kernel)\n            # re-cache the relevant quantities of the model\n            self._gpy_model.parameters_changed()\n\n\n    def meta_fit(\n        self,\n        source_X : List[np.ndarray],\n        source_Y : List[np.ndarray],\n        **kwargs,\n    ):\n        pass\n\n    def fit(\n        self,\n        X : np.ndarray,\n        Y : np.ndarray,\n        optimize: bool = False,\n    ):\n        self._X = np.copy(X)\n        self._y = np.copy(Y)\n        self._Y = np.copy(Y)\n\n        _X = np.copy(self._X)\n        _y = np.copy(self._y)\n\n\n        if self._gpy_model is None:\n            self._kernel = Matern32(input_dim=_X.shape[1])\n            self._gpy_model = GPRegression(\n                _X, _y, self._kernel, noise_var=self._noise_variance\n            )\n        else:\n            self._gpy_model.set_XY(_X, _y)\n\n        if optimize:\n            optimize_restarts_options = self._options.get(\n                \"optimize_restarts_options\", {}\n            )\n\n            kwargs = copy.deepcopy(optimize_restarts_options)\n\n            if \"verbose\" not in optimize_restarts_options:\n                kwargs[\"verbose\"] = False\n            kwargs[\"messages\"] = False\n            kwargs[\"optimizer\"]='lbfgs'\n            kwargs[\"max_iters\"] = 2000\n\n            try:\n                self._gpy_model.optimize_restarts(num_restarts=3, **kwargs)\n            
except np.linalg.LinAlgError:\n                # Keep the current hyperparameters if the optimization fails.\n                print('Error: hyperparameter optimization failed with np.linalg.LinAlgError')\n\n    def predict(\n        self, X: np.ndarray, return_full: bool = False, with_noise: bool = False\n    ) -> Tuple[np.ndarray, np.ndarray]:\n        mean, var = self._raw_predict(X, return_full, with_noise)\n        return mean, var\n\n    def _raw_predict(\n        self, X: np.ndarray, return_full: bool = False, with_noise: bool = False\n    ) -> Tuple[np.ndarray, np.ndarray]:\n        \"\"\"Predict functions distribution(s) for given test point(s) without taking into\n        account data normalization. If `self._normalize` is `False`, return the same as\n        `self.predict()`.\n\n        Same input/output as `self.predict()`.\n        \"\"\"\n        _X_test = X.copy()\n\n        if self._X is None:\n            mu = np.zeros((_X_test.shape[0], 1))\n            cov = self._kernel.K(_X_test)\n            var = np.diag(cov)[:, None]\n            return mu, cov if return_full else var\n\n        # ensure that no negative variance is predicted\n        mu, cov = self._gpy_model.predict(\n            _X_test, full_cov=return_full, include_likelihood=with_noise\n        )\n        if return_full:\n            if not is_pd(cov):\n                cov = nearest_pd(cov)\n        else:\n            cov = np.clip(cov, 1e-20, None)\n        return mu, cov\n\n    def predict_posterior_mean(self, X) -> np.ndarray:\n        r\"\"\"Perform model inference.\n\n        Predict the posterior mean of the latent distribution `f` for given test points.\n        Achieves the same as `self.predict(data)[0]` but is much faster.\n        Scales as $\\mathcal{O}(n)$, where $n$ is the number of training points. Useful\n        when the (co-)variance prediction is not needed. Computing the latter scales as\n        $\\mathcal{O}(n^2)$.\n\n        Args:\n            data: Input data to predict on. `shape = (n_points, n_features)`\n\n        Returns:\n            The mean prediction. `shape = (n_points, 1)`\n        \"\"\"\n        _x = X.copy()\n        if self._X is None:\n            return np.zeros(_x.shape)\n        _X = self._X.copy()\n\n        mu = self._kernel.K(_x, _X) @ self._gpy_model.posterior.woodbury_vector\n\n        return mu\n\n    def predict_posterior_covariance(self, x1, x2) -> np.ndarray:\n        \"\"\"Perform model inference.\n\n        Predict the posterior covariance between `(x1, x2)` of the latent distribution\n        `f`. In case `x1 == x2`, achieves the same as\n        `self.predict(x1, return_full=True)[1]`.\n\n        Args:\n            x1: Input data to predict on. `shape = (n_points_1, n_features)`\n            x2: Input data to predict on. `shape = (n_points_2, n_features)`\n\n        Returns:\n            Predicted covariance for every input. 
`shape = (n_points_1, n_points_2)`\n        \"\"\"\n        _X1 = x1.copy()\n        _X2 = x2.copy()\n\n        if self._X is None:\n            cov = self._kernel.K(_X1, _X2)\n            return cov\n\n        cov = self._gpy_model.posterior_covariance_between_points(\n            _X1, _X2, include_likelihood=False\n        )\n\n        return cov\n\n    def compute_kernel(self, x1, x2) -> np.ndarray:\n        \"\"\"Evaluate the kernel matrix for desired input points.\n\n        Wrapper around `self.kernel.K()` that takes care of normalization and allows\n        for prediction of empty GP.\n\n        Args:\n            x1: First input to be queried. `shape = (n_points_1, n_features)`\n            x2: Second input to be queried. `shape = (n_points_2, n_features)`\n\n        Returns:\n            Kernel values at `(x1, x2)`. `shape = (n_points_1, n_points_2)`\n        \"\"\"\n        _x1, _x2 = np.copy(x1), np.copy(x2)\n\n        return self._kernel.K(_x1, _x2)\n\n    def compute_kernel_diagonal(self, X) -> np.ndarray:\n        \"\"\"Evaluate diagonal of kernel matrix for desired input points.\n\n        Much faster than `compute_kernel()` in case only the diagonal is needed.\n        Wrapper around `self.kernel.Kdiag()` that takes care of normalization and\n        allows for prediction of empty GP.\n\n        Args:\n            data: Input to be queried. `shape = (n_points, n_features)`\n\n        Returns:\n            Kernel diagonal. `shape = (n_points, 1)`\n        \"\"\"\n        _x = np.copy(X)\n\n        return self._kernel.Kdiag(_x).reshape(-1, 1)\n\n    def sample(\n        self, X, size: int = 1, with_noise: bool = False\n    ) -> np.ndarray:\n        \"\"\"Perform model inference.\n\n        Sample functions from the posterior distribution for the given test points.\n\n        Args:\n            data: Input data to predict on. `shape = (n_points, n_features)`\n            size: Number of functions to sample.\n            with_noise: If `False`, the latent function `f` is considered. If `True`,\n                the observed function `y` that includes the noise variance is\n                considered.\n\n        Returns:\n            Sampled function value for every input. `shape = (n_points, size)`\n        \"\"\"\n        mean, cov = self.predict(X, return_full=True, with_noise=with_noise)\n        mean = mean.flatten()\n        sample = np.random.multivariate_normal(mean, cov, size).T\n        return sample\n    \n    def get_fmin(self):\n\n        \n        return np.min(self._y)\n         \n"
  },
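A toy regression sketch for the GPy-backed wrapper (requires GPy); `fit` builds a Matern32 kernel on first use, and `optimize=True` runs restarted hyperparameter optimization:

```python
import numpy as np
from transopt.optimizer.model.gp import GP

X = np.random.rand(30, 2)
Y = np.sin(3 * X[:, :1]) + 0.1 * np.random.randn(30, 1)

model = GP(noise_variance=1.0)
model.fit(X, Y, optimize=True)
mu, var = model.predict(np.random.rand(5, 2))  # each of shape (5, 1)
print(model.get_fmin())                        # best observed objective value
```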
  {
    "path": "transopt/optimizer/model/hebo.py",
    "content": "# Copyright (C) 2020. Huawei Technologies Co., Ltd. All rights reserved.\n\n# This program is free software; you can redistribute it and/or modify it under\n# the terms of the MIT license.\n\n# This program is distributed in the hope that it will be useful, but WITHOUT ANY\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A\n# PARTICULAR PURPOSE. See the MIT License for more details.\n\n\nimport sys\nfrom typing import Optional\n\nimport numpy  as np\nimport pandas as pd\nimport torch\nfrom copy import deepcopy\nfrom torch.quasirandom import SobolEngine\nfrom sklearn.preprocessing import power_transform\n\n\nfrom external.hebo.design_space.design_space import DesignSpace\nfrom external.hebo.models.model_factory import get_model\nfrom external.hebo.acquisitions.acq import MACE, Mean, Sigma\nfrom external.hebo.acq_optimizers.evolution_optimizer import EvolutionOpt\n\nfrom .abstract_optimizer import AbstractOptimizer\n\ntorch.set_num_threads(min(1, torch.get_num_threads()))\n\n\n\n\n\n\n\n\n\n\n\nclass HEBO(AbstractOptimizer):\n    support_parallel_opt  = True\n    support_combinatorial = True\n    support_contextual    = True\n    def __init__(self, space, model_name = 'gpy', rand_sample = None, acq_cls = MACE, es = 'nsga2', model_config = None,\n                 scramble_seed: Optional[int] = None ):\n        \"\"\"\n        model_name  : surrogate model to be used\n        rand_sample : iterations to perform random sampling\n        scramble_seed : seed used for the sobol sampling of the first initial points\n        \"\"\"\n        super().__init__(space)\n        self.space       = space\n        self.es          = es\n        self.X           = pd.DataFrame(columns = self.space.para_names)\n        self.y           = np.zeros((0, 1))\n        self.model_name  = model_name\n        self.rand_sample = 1 + self.space.num_paras if rand_sample is None else max(2, rand_sample)\n        self.scramble_seed = scramble_seed\n        self.sobol       = SobolEngine(self.space.num_paras, scramble = True, seed = scramble_seed)\n        self.acq_cls     = acq_cls\n        self._model_config = model_config\n\n    def quasi_sample(self, n, fix_input = None): \n        samp    = self.sobol.draw(n)\n        samp    = samp * (self.space.opt_ub - self.space.opt_lb) + self.space.opt_lb\n        x       = samp[:, :self.space.num_numeric]\n        xe      = samp[:, self.space.num_numeric:]\n        for i, n in enumerate(self.space.numeric_names):\n            if self.space.paras[n].is_discrete_after_transform:\n                x[:, i] = x[:, i].round()\n        df_samp = self.space.inverse_transform(x, xe)\n        if fix_input is not None:\n            for k, v in fix_input.items():\n                df_samp[k] = v\n        return df_samp\n\n    @property\n    def model_config(self):\n        if self._model_config is None:\n            if self.model_name == 'gp':\n                cfg = {\n                        'lr'           : 0.01,\n                        'num_epochs'   : 100,\n                        'verbose'      : False,\n                        'noise_lb'     : 8e-4, \n                        'pred_likeli'  : False\n                        }\n            elif self.model_name == 'gpy':\n                cfg = {\n                        'verbose' : False,\n                        'warp'    : True,\n                        'space'   : self.space\n                        }\n            elif self.model_name == 'gpy_mlp':\n                cfg = {\n               
         'verbose' : False\n                        }\n            elif self.model_name == 'rf':\n                cfg =  {\n                        'n_estimators' : 20\n                        }\n            else:\n                cfg = {}\n        else:\n            cfg = deepcopy(self._model_config)\n\n        if self.space.num_categorical > 0:\n            cfg['num_uniqs'] = [len(self.space.paras[name].categories) for name in self.space.enum_names]\n        return cfg\n\n    def get_best_id(self, fix_input : dict = None) -> int:\n        if fix_input is None:\n            return np.argmin(self.y.reshape(-1))\n        X = self.X.copy()\n        y = self.y.copy()\n        for k, v in fix_input.items():\n            if X[k].dtype != 'float':\n                crit = (X[k] != v).values\n            else:\n                crit = ((X[k] - v).abs() > np.finfo(float).eps).values\n            y[crit]  = np.inf\n        if np.isfinite(y).any():\n            return np.argmin(y.reshape(-1))\n        else:\n            return np.argmin(self.y.reshape(-1))\n\n    def suggest(self, n_suggestions=1, fix_input = None):\n        if self.acq_cls != MACE and n_suggestions != 1:\n            raise RuntimeError('Parallel optimization is supported only for MACE acquisition')\n        if self.X.shape[0] < self.rand_sample:\n            sample = self.quasi_sample(n_suggestions, fix_input)\n            return sample\n        else:\n            X, Xe = self.space.transform(self.X)\n            try:\n                if self.y.min() <= 0:\n                    y = torch.FloatTensor(power_transform(self.y / self.y.std(), method = 'yeo-johnson'))\n                else:\n                    y = torch.FloatTensor(power_transform(self.y / self.y.std(), method = 'box-cox'))\n                    if y.std() < 0.5:\n                        y = torch.FloatTensor(power_transform(self.y / self.y.std(), method = 'yeo-johnson'))\n                if y.std() < 0.5:\n                    raise RuntimeError('Power transformation failed')\n                model = get_model(self.model_name, self.space.num_numeric, self.space.num_categorical, 1, **self.model_config)\n                model.fit(X, Xe, y)\n            except:\n                y     = torch.FloatTensor(self.y).clone()\n                model = get_model(self.model_name, self.space.num_numeric, self.space.num_categorical, 1, **self.model_config)\n                model.fit(X, Xe, y)\n\n            best_id = self.get_best_id(fix_input)\n            best_x  = self.X.iloc[[best_id]]\n            best_y  = y.min()\n            py_best, ps2_best = model.predict(*self.space.transform(best_x))\n            py_best = py_best.detach().numpy().squeeze()\n            ps_best = ps2_best.sqrt().detach().numpy().squeeze()\n\n            iter  = max(1, self.X.shape[0] // n_suggestions)\n            upsi  = 0.5\n            delta = 0.01\n            # kappa = np.sqrt(upsi * 2 * np.log(iter **  (2.0 + self.X.shape[1] / 2.0) * 3 * np.pi**2 / (3 * delta)))\n            kappa = np.sqrt(upsi * 2 * ((2.0 + self.X.shape[1] / 2.0) * np.log(iter) + np.log(3 * np.pi**2 / (3 * delta))))\n\n            acq = self.acq_cls(model, best_y = py_best, kappa = kappa) # LCB < py_best\n            mu  = Mean(model)\n            sig = Sigma(model, linear_a = -1.)\n            opt = EvolutionOpt(self.space, acq, pop = 100, iters = 100, verbose = False, es=self.es)\n            rec = opt.optimize(initial_suggest = best_x, fix_input = fix_input).drop_duplicates()\n            rec = rec[self.check_unique(rec)]\n\n       
     cnt = 0\n            while rec.shape[0] < n_suggestions:\n                rand_rec = self.quasi_sample(n_suggestions - rec.shape[0], fix_input)\n                rand_rec = rand_rec[self.check_unique(rand_rec)]\n                rec      = pd.concat([rec, rand_rec], axis = 0, ignore_index = True)\n                cnt += 1\n                if cnt > 3:\n                    # sometimes the design space is so small that duplicated sampling is unavoidable\n                    break \n            if rec.shape[0] < n_suggestions:\n                rand_rec = self.quasi_sample(n_suggestions - rec.shape[0], fix_input)\n                rec      = pd.concat([rec, rand_rec], axis = 0, ignore_index = True)\n\n            select_id = np.random.choice(rec.shape[0], n_suggestions, replace = False).tolist()\n            x_guess   = []\n            with torch.no_grad():\n                py_all       = mu(*self.space.transform(rec)).squeeze().numpy()\n                ps_all       = -1 * sig(*self.space.transform(rec)).squeeze().numpy()\n                best_pred_id = np.argmin(py_all)\n                best_unce_id = np.argmax(ps_all)\n                if best_unce_id not in select_id and n_suggestions > 2:\n                    select_id[0]= best_unce_id\n                if best_pred_id not in select_id and n_suggestions > 2:\n                    select_id[1]= best_pred_id\n                rec_selected = rec.iloc[select_id].copy()\n            return rec_selected\n\n    def check_unique(self, rec : pd.DataFrame) -> list:\n        return (~pd.concat([self.X, rec], axis = 0).duplicated().tail(rec.shape[0]).values).tolist()\n\n    def observe(self, X, y):\n        \"\"\"Feed an observation back.\n\n        Parameters\n        ----------\n        X : pandas DataFrame\n            Places where the objective function has already been evaluated.\n            Each suggestion is a dictionary where each key corresponds to a\n            parameter being optimized.\n        y : array-like, shape (n,1)\n            Corresponding values where objective has been evaluated\n        \"\"\"\n        valid_id = np.where(np.isfinite(y.reshape(-1)))[0].tolist()\n        XX       = X.iloc[valid_id]\n        yy       = y[valid_id].reshape(-1, 1)\n        self.X   = pd.concat([self.X, XX], axis = 0, ignore_index = True)\n        self.y   = np.vstack([self.y, yy])\n\n    @property\n    def best_x(self)->pd.DataFrame:\n        if self.X.shape[0] == 0:\n            raise RuntimeError('No data has been observed!')\n        else:\n            return self.X.iloc[[self.y.argmin()]]\n\n    @property\n    def best_y(self)->float:\n        if self.X.shape[0] == 0:\n            raise RuntimeError('No data has been observed!')\n        else:\n            return self.y.min()\n"
  },
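  {
    "path": "examples/kappa_schedule_sketch.py",
    "content": "# Hypothetical sketch, not part of the original sources: it reproduces the\n# GP-UCB-style exploration schedule hard-coded in `suggest` above, where the\n# LCB trade-off kappa grows slowly with the iteration counter. `upsi` and\n# `delta` mirror the constants in `suggest`; `dim` stands for self.X.shape[1]\n# and `t` for the iteration count.\nimport numpy as np\n\n\ndef kappa_schedule(t: int, dim: int, upsi: float = 0.5, delta: float = 0.01) -> float:\n    # exploration weight of the LCB acquisition at iteration t\n    return np.sqrt(upsi * 2 * ((2.0 + dim / 2.0) * np.log(t) + np.log(3 * np.pi**2 / (3 * delta))))\n\n\nif __name__ == '__main__':\n    # kappa increases with t, so later iterations explore slightly more\n    for t in [1, 10, 50, 200]:\n        print(t, round(kappa_schedule(t, dim=5), 3))\n"
  },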
  {
    "path": "transopt/optimizer/model/mhgp.py",
    "content": "# Copyright (c) 2021 Robert Bosch GmbH\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published\n# by the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <https://www.gnu.org/licenses/>.\n\nimport copy\nimport numpy as np\nfrom typing import Dict, Hashable, Union, Sequence, Tuple, List\n\nfrom GPy.kern import RBF\nfrom GPy.kern import Kern, RBF\nfrom transopt.optimizer.model.gp import GP\nfrom transopt.optimizer.model.model_base import Model\nfrom transopt.agent.registry import model_registry\n\n@model_registry.register(\"MHGP\")\nclass MHGP(Model):\n    \"\"\"Stack of Gaussian processes.\n\n    Transfer Learning model based on [Golovin et al: Google Vizier: A Service for\n    Black-Box Optimization](https://dl.acm.org/doi/abs/10.1145/3097983.3098043).\n    Given a list of source data sets, the\n    transfer to the target data set is done by training a separate GP for each data set\n    whose prior mean function is the posterior mean function of the previous GP in the\n    stack.\n    \"\"\"\n\n    def __init__(self,         \n        kernel: Kern = None,\n        noise_variance: float = 1.0,\n        normalize: bool = True,\n        **options: dict):\n        \"\"\"Initialize the Method.\n\n        Args:\n            n_features: Number of input parameters of the data.\n            within_model_normalize: Normalize each GP internally. Helpful for\n                numerical stability.\n        \"\"\"\n        super().__init__()\n\n        self._normalize = normalize\n        self._kernel = kernel\n        self._noise_variance = noise_variance\n        self.n_samples = 0\n\n        self.source_gps = []\n\n        # GP on difference between target data and last source data set\n        self.target_gp = None\n\n    def _compute_residuals(self, X: np.ndarray, Y: np.ndarray) -> np.ndarray:\n        \"\"\"Determine the difference between given y-values and the sum of predicted\n        values from the models in 'source_gps'.\n\n        Args:\n            data: Observation (input and target) data.\n                Input data: ndarray, `shape = (n_points, n_features)`\n                Target data: ndarray, `shape = (n_points, 1)`\n\n        Returns:\n            Difference between observed values and sum of predicted values\n            from `source_gps`. 
`shape = (n_points, 1)`\n        \"\"\"\n        if self.n_features != X.shape[1]:\n            raise ValueError(\"Number of features in model and input data mismatch.\")\n\n        if not self.source_gps:\n            return Y\n\n        predicted_y = self.predict_posterior_mean(\n            X, idx=len(self.source_gps) - 1\n        )\n\n        residuals = Y - predicted_y\n\n        return residuals\n\n    def _update_meta_data(self, *gps: GP):\n        \"\"\"Cache the meta data after meta training.\"\"\"\n        for gp in gps:\n            self.source_gps.append(gp)\n\n    def _meta_fit_single_gp(\n        self,\n        X : np.ndarray,\n        Y : np.ndarray,\n        optimize: bool,\n    ) -> GP:\n        \"\"\"Train a new source GP on the given source data.\n\n        Args:\n            X: Source input data. `shape = (n_points, n_features)`\n            Y: Source target data. `shape = (n_points, 1)`\n            optimize: Switch to run hyperparameter optimization.\n\n        Returns:\n            The newly trained GP.\n        \"\"\"\n        self.n_features = X.shape[1]\n        \n        residuals = self._compute_residuals(X, Y)\n        \n        kernel = RBF(self.n_features, ARD=True)\n        new_gp = GP(\n            kernel, noise_variance=self._noise_variance\n        )\n        new_gp.fit(\n            X = X,\n            Y = residuals,\n            optimize = optimize,\n        )\n        return new_gp\n\n    def meta_fit(\n        self,\n        source_X : List[np.ndarray],\n        source_Y : List[np.ndarray],\n        optimize: Union[bool, Sequence[bool]] = True,\n    ):\n        \"\"\"Train the source GPs on the given source data.\n\n        Args:\n            source_X: List of source input arrays.\n            source_Y: List of source target arrays. The stack of GPs is trained on\n                the residuals between two consecutive data sets in these lists.\n            optimize: Switch to run hyperparameter optimization.\n        \"\"\"\n        assert isinstance(optimize, bool) or isinstance(optimize, list)\n        if isinstance(optimize, list):\n            assert len(source_X) == len(optimize)\n        optimize_flag = copy.copy(optimize)\n\n        if isinstance(optimize_flag, bool):\n            optimize_flag = [optimize_flag] * len(source_X)\n\n        for i in range(len(source_X)):\n            new_gp = self._meta_fit_single_gp(\n                source_X[i],\n                source_Y[i],\n                optimize=optimize_flag[i],\n            )\n            self._update_meta_data(new_gp)\n\n    def fit(\n        self,\n        X: np.ndarray,\n        Y: np.ndarray,\n        optimize: bool = False,\n    ):\n        if not self.source_gps:\n            raise ValueError(\n                \"Error: source gps are not trained. Forgot to call `meta_fit`.\"\n            )\n\n        self._X = copy.deepcopy(X)\n        self._y = copy.deepcopy(Y)\n        \n        self.n_samples, n_features = self._X.shape\n        if self.n_features != n_features:\n            raise ValueError(\"Number of features in model and input data mismatch.\")\n        \n        if self.target_gp is None:\n            self.target_gp = GP(\n                RBF(self.n_features, ARD=True),\n                noise_variance=0.1,\n            )\n\n        residuals = self._compute_residuals(X, Y)\n\n        self.target_gp.fit(X, residuals, optimize)\n\n    def predict(\n        self, X: np.ndarray, return_full: bool = False, with_noise: bool = False\n    ) -> Tuple[np.ndarray, np.ndarray]:\n        if not self.source_gps:\n            raise ValueError(\n                \"Error: source gps are not trained. 
Forgot to call `meta_fit`.\"\n            )\n\n        # returned mean: sum of means of the predictions of all source and target GPs\n        mu = self.predict_posterior_mean(X)\n\n        # returned variance is the variance of target GP\n        _, var = self.target_gp.predict(\n            X, return_full=return_full, with_noise=with_noise\n        )\n\n        return mu, var\n\n    def predict_posterior_mean(self, X: np.ndarray, idx: int = None) -> np.ndarray:\n        \"\"\"Predict the mean function for given test point(s).\n\n        For `idx=None` returns the same as `self.predict(data)[0]` but avoids the\n        overhead coming from predicting the variance. If `idx` is specified, returns\n        the sum of all the means up to the `idx`-th GP. Useful for inspecting the inner\n        state of the stack.\n\n        Args:\n            data: Input data to predict on.\n                Data is provided as ndarray with shape = (n_points, n_features).\n            idx: Integer of the GP in the stack. Counting starts from the bottom at\n                zero. If `None`, the mean prediction of the entire stack is returned.\n\n        Returns:\n            Predicted mean for every input. `shape = (n_points, 1)`\n        \"\"\"\n\n        all_gps = self.source_gps + [self.target_gp]\n\n        if idx is None:  # if None, the target GP is considered\n            idx = len(all_gps) - 1\n\n        mu = np.zeros((X.shape[0], 1))\n        # returned mean is a sum of means of the predictions of all GPs below idx\n        for model in all_gps[: idx + 1]:\n            mu += model.predict_posterior_mean(X)\n\n        return mu\n\n    def predict_posterior_covariance(self, x1: np.ndarray, x2: np.ndarray) -> np.ndarray:\n        \"\"\"Posterior covariance between two inputs.\n\n        Args:\n            x1: First input to be queried. `shape = (n_points_1, n_features)`\n            x2: Second input to be queried. `shape = (n_points_2, n_features)`\n\n        Returns:\n            Posterior covariance at `(x1, x2)`. `shape = (n_points_1, n_points_2)`\n        \"\"\"\n        return self.target_gp.predict_posterior_covariance(x1, x2)\n    \n    def get_fmin(self):\n\n        return np.min(self._y)"
  },
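  {
    "path": "examples/mhgp_usage_sketch.py",
    "content": "# Hypothetical usage sketch, not part of the original sources: shows the call\n# order the MHGP stack above expects. `meta_fit` trains one GP per source task\n# on the residuals of the stack so far; `fit` then trains the target GP on what\n# the source stack cannot explain. Assumes `transopt` and GPy are importable;\n# the tasks here are synthetic 1-D functions.\nimport numpy as np\n\nfrom transopt.optimizer.model.mhgp import MHGP\n\nrng = np.random.default_rng(0)\n\n# two related source tasks\nsource_X = [rng.uniform(0, 1, size=(30, 1)) for _ in range(2)]\nsource_Y = [np.sin(6 * x) + 0.05 * rng.standard_normal(x.shape) for x in source_X]\n\n# a shifted target task with few observations\ntarget_X = rng.uniform(0, 1, size=(10, 1))\ntarget_Y = np.sin(6 * target_X) + 0.2\n\nmodel = MHGP()\nmodel.meta_fit(source_X, source_Y, optimize=True)  # one GP per source task\nmodel.fit(target_X, target_Y, optimize=True)       # target GP on the residuals\n\nmu, var = model.predict(rng.uniform(0, 1, size=(5, 1)))\nprint(mu.shape, var.shape)  # (5, 1) (5, 1)\n"
  },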
  {
    "path": "transopt/optimizer/model/mlp.py",
    "content": "import os\nfrom typing import List, Tuple\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torchvision.datasets.utils as dataset_utils\nfrom PIL import Image\nfrom sklearn.model_selection import KFold, train_test_split\nfrom torch.autograd import grad\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom torchvision import datasets, transforms\n\nfrom transopt.agent.registry import model_registry\nfrom transopt.optimizer.model.model_base import Model\n\n\ndef compute_irm_penalty(losses, dummy):\n    g1 = grad(losses[0::2].mean(), dummy, create_graph=True)[0]\n    g2 = grad(losses[1::2].mean(), dummy, create_graph=True)[0]\n    return (g1 * g2).sum()\n\nclass Net(nn.Module):\n    def __init__(self, input_dim, dropout_rate=0.3):\n        super(Net, self).__init__()\n        self.fc1 = nn.Linear(input_dim, 64)\n        self.fc2 = nn.Linear(64, 32)\n        self.fc3 = nn.Linear(32, 16)\n        self.fc4 = nn.Linear(16, 1)\n        self.dropout = nn.Dropout(dropout_rate)\n\n    def forward(self, x):\n        x = F.relu(self.fc1(x))\n        x = self.dropout(x)\n        x = F.relu(self.fc2(x))\n        x = self.dropout(x)\n        x = F.relu(self.fc3(x))\n        logits = self.fc4(x)\n        return logits\n\n\n    \n@model_registry.register('MLP')\nclass MLP(Model):\n    def __init__(self, config):\n        super().__init__()\n        self._model = None\n        use_cuda = torch.cuda.is_available()\n        self.device = torch.device(\"cuda\" if use_cuda else \"cpu\")\n        self._batch_size = 16\n        self._dropout_rate = 0.3\n        self._best_model_state = None\n        self._best_val_loss = float('inf')\n    \n    def meta_fit(\n        self,\n        source_X : List[np.ndarray],\n        source_Y : List[np.ndarray],\n        **kwargs,\n    ):\n        pass\n\n    def fit(\n        self,\n        X : np.ndarray,\n        Y : np.ndarray,\n        epochs : int = 50,\n        optimize: bool = False,\n    ):\n        self._X = np.copy(X)\n        self._y = np.copy(Y)\n        self._Y = np.copy(Y)\n\n        _X = np.copy(self._X)\n        _y = np.copy(self._y)\n        \n        X_tensor = torch.tensor(_X, dtype=torch.float32)\n        y_tensor = torch.tensor(_y, dtype=torch.float32).view(-1, 1)\n        \n        X_train, X_val, y_train, y_val = train_test_split(X_tensor, y_tensor, test_size=0.1, random_state=42)\n\n        X_train_tensor = torch.tensor(X_train, dtype=torch.float32)\n        y_train_tensor = torch.tensor(y_train, dtype=torch.float32)\n        X_val_tensor = torch.tensor(X_val, dtype=torch.float32)\n        y_val_tensor = torch.tensor(y_val, dtype=torch.float32)\n\n        train_dataset = TensorDataset(X_train_tensor, y_train_tensor)\n        val_dataset = TensorDataset(X_val_tensor, y_val_tensor)\n\n        train_loader = DataLoader(train_dataset, batch_size=self._batch_size, shuffle=True)\n        val_loader = DataLoader(val_dataset, batch_size=self._batch_size, shuffle=False)\n\n        \n        patience = 5\n        patience_counter = 0\n\n        train_losses = []\n        val_losses = []\n\n        for epoch in range(epochs):\n            if self._model is None or patience_counter >= patience:\n                self._model = Net(input_dim=X_train.shape[1], dropout_rate=self._dropout_rate).to(self.device)\n                self._optimizer = optim.Adam(self._model.parameters(), lr=0.0001, weight_decay=1e-5)\n                
                patience_counter = 0\n\n            self._model.train()\n            train_loss = 0\n            for data, target in train_loader:\n                data, target = data.to(self.device), target.to(self.device)\n                self._optimizer.zero_grad()\n                output = self._model(data)\n                loss = F.mse_loss(output, target)\n                loss.backward()\n                self._optimizer.step()\n                train_loss += loss.item()\n\n            train_loss /= len(train_loader)\n            train_losses.append(train_loss)\n\n            self._model.eval()\n            val_loss = 0\n            with torch.no_grad():\n                for data, target in val_loader:\n                    data, target = data.to(self.device), target.to(self.device)\n                    output = self._model(data)\n                    loss = F.mse_loss(output, target)\n                    val_loss += loss.item()\n\n            val_loss /= len(val_loader)\n            val_losses.append(val_loss)\n\n            print(f'Epoch {epoch+1}, Train Loss: {train_loss:.4f}, Validation Loss: {val_loss:.4f}')\n\n            if val_loss < self._best_val_loss:\n                self._best_val_loss = val_loss\n                self._best_model_state = self._model.state_dict()\n                patience_counter = 0\n            else:\n                patience_counter += 1\n        \n        if self._best_model_state:\n            self._model.load_state_dict(self._best_model_state)\n        self.save_plots(train_losses, val_losses, X_val_tensor, y_val_tensor, 'output_plots', iter_num=_X.shape[0])\n\n    def predict(\n        self, X: np.ndarray, return_full: bool = False, with_noise: bool = False\n    ) -> Tuple[np.ndarray, np.ndarray]:\n        \n        data = torch.tensor(X, dtype=torch.float32).to(self.device)\n        self._model.eval()\n        with torch.no_grad():\n            output = self._model(data)\n        output = output.to('cpu')\n        output = output.numpy()\n        variance = np.zeros(shape=(output.shape[0], 1))\n        return output, variance\n        \n    def get_fmin(self):\n        \n        return np.min(self._y)\n    \n    \n    def save_plots(self, train_losses, val_losses, X_val, y_val, output_dir, iter_num):\n        if not os.path.exists(output_dir):\n            os.makedirs(output_dir)\n\n        # Save the train/validation loss curves\n        plt.figure(figsize=(10, 5))\n        plt.plot(train_losses, label='Train Loss')\n        plt.plot(val_losses, label='Validation Loss')\n        plt.xlabel('Epoch')\n        plt.ylabel('Loss')\n        plt.legend()\n        plt.title('Train and Validation Loss Over Epochs')\n        plt.savefig(os.path.join(output_dir, f'loss_plot_{iter_num}.png'))\n        plt.close()\n\n        # Save a comparison plot of predictions against true values\n        self._model.eval()\n        with torch.no_grad():\n            predictions = self._model(X_val.to(self.device)).cpu().numpy()\n        plt.figure(figsize=(10, 5))\n        plt.plot(range(len(y_val)), y_val.cpu().numpy(), label='True Values')\n        plt.plot(range(len(predictions)), predictions, label='Predictions')\n        plt.xlabel('Samples')\n        plt.ylabel('Values')\n        plt.legend()\n        plt.title('Predictions vs True Values')\n        plt.savefig(os.path.join(output_dir, f'predictions_vs_true_plot_{iter_num}.png'))\n        plt.close()"
  },
  {
    "path": "transopt/optimizer/model/model_base.py",
    "content": "from abc import abstractmethod, ABC\nfrom typing import Dict, Hashable\nimport numpy as np\n\n\nclass Model(ABC):\n    \"\"\"Abstract model class.\"\"\"\n\n    def __init__(self):\n        \"\"\"Initializes base model.\"\"\"\n        self._X = None\n        self._Y = None\n\n    @property\n    def X(self) -> np.ndarray:\n        \"\"\"Return input data.\"\"\"\n        return self._X\n\n    @property\n    def y(self) -> np.ndarray:\n        \"\"\"Return target data.\"\"\"\n        return self._Y\n\n    @abstractmethod\n    def meta_fit(self, metadata, **kwargs):\n        \"\"\"Train model on historical data.\n\n        Parameters:\n        -----------\n        metadata\n            Dictionary containing a numerical representation of the meta-data that can\n            be used to meta-train a model for each task.\n        \"\"\"\n        pass\n\n    @abstractmethod\n    def fit(self, X, Y, **kwargs):\n        \"\"\"Adjust model parameter to the observation on the new dataset.\n\n        Parameters:\n        -----------\n        data: TaskData\n            Observation data.\n        \"\"\"\n        pass\n\n    @abstractmethod\n    def predict(self, X) -> (np.ndarray, np.ndarray):\n        \"\"\"Predict outcomes for a given array of input values.\n\n        Parameters:\n        -----------\n        data: InputData\n            Input data to predict on.\n\n        Returns\n        -------\n        mu: shape = (n_points, 1)\n            Predicted mean for every input\n        cov: shape = (n_points, n_points) or (n_points, 1)\n            Predicted (co-)variance for every input\n        \"\"\"\n        pass\n    "
  },
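  {
    "path": "examples/model_base_sketch.py",
    "content": "# Hypothetical sketch, not part of the original sources: the smallest concrete\n# surrogate satisfying the abstract `Model` contract above. It ignores\n# meta-data and predicts the training mean with unit variance, which is just\n# enough to show the shapes `meta_fit` / `fit` / `predict` must handle.\nimport numpy as np\n\nfrom transopt.optimizer.model.model_base import Model\n\n\nclass ConstantModel(Model):\n    def meta_fit(self, metadata, **kwargs):\n        pass  # this toy model does no transfer learning\n\n    def fit(self, X, Y, **kwargs):\n        self._X = np.copy(X)\n        self._Y = np.copy(Y)\n\n    def predict(self, X):\n        n = X.shape[0]\n        mu = np.full((n, 1), self._Y.mean())  # shape (n_points, 1)\n        var = np.ones((n, 1))                 # shape (n_points, 1)\n        return mu, var\n\n\nif __name__ == '__main__':\n    m = ConstantModel()\n    m.fit(np.random.rand(20, 3), np.random.rand(20, 1))\n    print(m.predict(np.random.rand(4, 3))[0].shape)  # (4, 1)\n"
  },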
  {
    "path": "transopt/optimizer/model/moeadego.py",
    "content": "import numpy as np\nfrom GPy.kern import RBF, Kern\nfrom sklearn.preprocessing import StandardScaler\n\nfrom transopt.agent.registry import model_registry\nfrom transopt.optimizer.model.gp import GP\nfrom transopt.optimizer.model.model_base import Model\nfrom transopt.utils.weights import init_weight, tchebycheff\n\n\n@model_registry.register(\"MOEAD-EGO\")\nclass MoeadEGO(Model):\n    def __init__(\n        self,\n        num_objective: int,\n        name=\"MoeadEGO\",\n        num_weights=10,\n        seed=0,\n        normalize: bool = True,\n        **options: dict\n    ):\n        super().__init__()\n        self.name = name\n        self.num_weights = num_weights\n        self.num_objective = num_objective\n        self.normalize = normalize\n        self.seed = seed\n        self.weights = init_weight(self.num_objective, self.num_weights)\n        self.models = []\n        self._x_normalizer = StandardScaler() if normalize else None\n        self._y_normalizer = StandardScaler() if normalize else None\n        self._options = options\n        self._initialize_weights()\n\n    def fit(self, X, Y):\n        self._X = np.copy(X)\n        self._Y = np.copy(Y)\n        if self.normalize:\n            X = self._x_normalizer.fit_transform(X)\n            Y = self._y_normalizer.fit_transform(Y)\n        self._update_model(X, Y)\n\n    def predict(self, X, full_cov=False):\n        return self._make_prediction(X, full_cov)\n\n    def _create_model(self, X, Y):\n        self.models = []\n        ideal_point = np.min(Y.T, axis=0)\n        for i, weight in enumerate(self.weights):\n            kernel = RBF(input_dim=X.shape[1])\n            Y_weighted = tchebycheff(Y.T, weight, ideal=ideal_point)\n            model = GP(\n            kernel, noise_variance=self._noise_variance\n            )\n            model.fit(\n                X = X,\n                Y = Y_weighted,\n                optimize = True,\n            )\n            model[\".*Gaussian_noise.variance\"].constrain_fixed(1.0e-4)\n            model[\".*rbf.variance\"].constrain_fixed(1.0)\n            self.models.append(model)\n\n    def _update_model(self, X, Y):\n        if not self.models:\n            self._create_model(X, Y)\n        else:\n            ideal_point = np.min(Y.T, axis=0)\n            for i, model in enumerate(self.models):\n                Y_weighted = tchebycheff(Y.T, self.weights[i], ideal=ideal_point)\n                model.set_XY(X, Y_weighted[:, np.newaxis])\n\n        try:\n            for model in self.models:\n                model.optimize_restarts(\n                    num_restarts=1, verbose=self.verbose, robust=True\n                )\n        except np.linalg.linalg.LinAlgError as e:\n            print(\"Error during model optimization: \", e)\n\n    def _make_prediction(self, X, full_cov=False):\n        pred_mean = np.zeros((X.shape[0], 0))\n        pred_var = (\n            np.zeros((X.shape[0], 0))\n            if not full_cov\n            else np.zeros((0, X.shape[0], X.shape[0]))\n        )\n\n        for model in self.model_list:\n            mean, var = model.predict(X, full_cov=full_cov)\n            pred_mean = np.append(pred_mean, mean, axis=1)\n            if full_cov:\n                pred_var = np.append(pred_var, [var], axis=0)\n            else:\n                pred_var = np.append(pred_var, var, axis=1)\n        return pred_mean, pred_var\n\n    def _make_prediction_by_id(self, X, idx, full_cov=False):\n        pred_mean = np.zeros((X.shape[0], 0))\n        if 
full_cov:\n            pred_var = np.zeros((0, X.shape[0], X.shape[0]))\n        else:\n            pred_var = np.zeros((X.shape[0], 0))\n            mean, var = self.model_list[idx].predict(X, full_cov=full_cov)\n            pred_mean = np.append(pred_mean, mean, axis=1)\n            if full_cov:\n                pred_var = np.append(pred_var, [var], axis=0)\n            else:\n                pred_var = np.append(pred_var, var, axis=1)\n        return pred_mean, pred_var\n"
  },
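  {
    "path": "examples/tchebycheff_sketch.py",
    "content": "# Illustrative sketch, not part of the original sources: a standalone version\n# of the Tchebycheff scalarization that MOEAD-EGO applies per weight vector\n# before fitting one single-objective GP per sub-problem. The repository's own\n# implementation lives in `transopt.utils.weights.tchebycheff`; this sketch\n# only shows the textbook definition g(y) = max_i w_i * |y_i - z_i| with\n# ideal point z.\nimport numpy as np\n\n\ndef tchebycheff_sketch(Y, weight, ideal):\n    # Y: (n_points, n_objectives); weight, ideal: (n_objectives,)\n    return np.max(weight * np.abs(Y - ideal), axis=1)\n\n\nif __name__ == '__main__':\n    Y = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])\n    ideal = Y.min(axis=0)\n    # different weights carve the Pareto front into different sub-problems\n    for w in (np.array([0.8, 0.2]), np.array([0.5, 0.5])):\n        print(w, tchebycheff_sketch(Y, w, ideal))\n"
  },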
  {
    "path": "transopt/optimizer/model/mtgp.py",
    "content": "# Copyright (c) 2021 Robert Bosch GmbH\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published\n# by the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <https://www.gnu.org/licenses/>.\n\nimport copy\nfrom typing import Dict, List, Tuple\n\nimport numpy as np\nfrom GPy.kern import Kern, RBF\nfrom GPy.models import GPCoregionalizedRegression\nfrom GPy.util.multioutput import ICM\n\nfrom transopt.optimizer.model.gp import GP\nfrom transopt.optimizer.model.utils import is_pd, nearest_pd\nfrom transopt.agent.registry import model_registry\n\n@model_registry.register(\"MTGP\")\nclass MTGP(GP):\n    r\"\"\"Multi-Task-Single-k GP, a GP-based transfer-learning algorithm.\n\n    Multi-Task-Single-kGP models the source and target data on an equal footing with no\n    explicit hierarchy. Correlations within tasks are assumed to be different than\n    those across tasks. Also known as coregionalized regression model,\n    Multi-Task-Single-k GP models the data with a kernel of the form\n\n    $$\n        \\begin{bmatrix}\n        k((x, s), (x', s)) &  k((x, s), (x', t)) \\\\\n        k((x, t), (x', s)) &  k((x, t), (x', t))\n        \\end{bmatrix}\n        =\n        \\begin{bmatrix}\n        W_{ss} & W_{st} \\\\\n        W_{st} & W_{tt}\n        \\end{bmatrix}\n        k(x, x'),\n    $$\n\n    where $\\mathbf{W}$ is a positive semi-definite matrix also known as\n    coregionalization matrix.\n\n    Multi-Task-Single-k GP is a powerful but computationally expensive Method since (i)\n    it scales cubically with the total number of data points and (ii) the number of\n    hyperparameters scales quadratically with the number of tasks.\n    \"\"\"\n\n    def __init__(\n        self,\n        kernel: Kern = None,\n        noise_variance: float = 1.0,\n        normalize: bool = False,\n        **options: dict,\n    ):\n        super().__init__(kernel, noise_variance, normalize, **options)\n        self._normalize = normalize\n        self._kernel = kernel\n        self._multikernel = None\n        self._gpy_model = None\n        self._noise_variance = []\n        self.n_sources = None\n        self.n_features = None\n\n        self._metadata_x = []\n        self._metadata_y = []\n\n        self._options = options\n\n    def meta_fit(\n        self,\n        source_X : List[np.ndarray],\n        source_Y : List[np.ndarray],\n        **kwargs,\n    ):\n        data_X = copy.deepcopy(source_X)\n        data_Y = copy.deepcopy(source_Y)\n        self.n_sources = len(data_X)\n\n        # create list of input/observed values from source data\n        for i in range(self.n_sources):\n            self._metadata_x = self._metadata_x + data_X\n            self._metadata_y = self._metadata_y + data_Y\n        self.n_features = self._metadata_x[0].shape[-1]\n\n    def fit(\n        self,\n        X : np.ndarray,\n        Y : np.ndarray,\n        optimize: bool = False,\n    ):\n        if not self._metadata_x:\n            raise ValueError(\n                \"Error: source data not 
available. Forgot to call `meta_fit`.\"\n            )\n\n        self._X = np.copy(X)\n        self._y = np.copy(Y)\n\n        # add target data to the list of input/observed values\n        x_list = copy.deepcopy(self._metadata_x)\n        y_list = copy.deepcopy(self._metadata_y)\n        x_list.append(X)\n        y_list.append(Y)\n\n        if self._normalize:\n            # add source order to data lists\n            for i in range(len(x_list)):\n                x_list[i] = np.hstack(\n                    [x_list[i], np.zeros((x_list[i].shape[0], 1)) + i]\n                )\n                y_list[i] = np.hstack(\n                    [y_list[i], np.zeros((y_list[i].shape[0], 1)) + i]\n                )\n            # merge all data into one array, normalize data\n            x_all = np.vstack(x_list)\n            x_all[:, :-1] = self._x_normalizer.fit_transform(x_all[:, :-1])\n            y_all = np.vstack(y_list)\n            y_all[:, :-1] = self._y_normalizer.fit_transform(y_all[:, :-1])\n            # transform data back to original list of arrays\n            for i in range(len(x_list)):\n                x_list[i] = x_all[np.where(x_all[:, -1] == i)][:, :-1]\n                y_list[i] = y_all[np.where(y_all[:, -1] == i)][:, :-1]\n\n        # define multiple output kernel\n        if self._kernel is None:\n            self._kernel = RBF(self.n_features)\n        multikernel = ICM(\n            input_dim=self.n_features,\n            num_outputs=self.n_sources + 1,\n            kernel=self._kernel,\n        )\n\n        # fit model to current data\n        self._gpy_model = GPCoregionalizedRegression(x_list, y_list, kernel=multikernel)\n\n        if optimize:\n            optimize_restarts_options = self._options.get(\n                \"optimize_restarts_options\", {}\n            )\n\n            kwargs = copy.deepcopy(optimize_restarts_options)\n\n            if \"verbose\" not in optimize_restarts_options:\n                kwargs[\"verbose\"] = False\n\n            self._gpy_model.optimize_restarts(**kwargs)\n\n        self._multikernel = self._gpy_model.kern.copy()\n\n        # noise variance: each element corresponds to the noise of one output\n        self._noise_variance = self._gpy_model.likelihood.param_array\n\n    def _raw_predict(\n        self, X: np.ndarray, return_full: bool = False, with_noise: bool = False\n    ) -> Tuple[np.ndarray, np.ndarray]:\n\n        _X = X.copy()\n\n        if self._X is None:\n            mu = np.zeros((_X.shape[0], 1))\n            cov = self._kernel.K(_X)\n            var = np.diag(cov)[:, None]\n            return mu, cov if return_full else var\n\n        if self._normalize:\n            _X = self._x_normalizer.transform(_X)\n\n        # predictions are made for the last output, which corresponds to the target;\n        # prepare extended input format + associated noise model\n        _X_test = np.hstack([_X, np.ones((_X.shape[0], 1)) * self.n_sources])\n        noise_dict = {\"output_index\": _X_test[:, -1:].astype(int)}\n\n        # ensure that no negative variance is predicted\n        mu, cov = self._gpy_model.predict(\n            _X_test,\n            full_cov=return_full,\n            include_likelihood=with_noise,\n            Y_metadata=noise_dict,\n        )\n        if return_full:\n            if not is_pd(cov):\n                cov = nearest_pd(cov)\n        else:\n            cov = np.clip(cov, 1e-20, None)\n        return mu, cov\n"
  },
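  {
    "path": "examples/icm_kernel_sketch.py",
    "content": "# Hypothetical sketch, not part of the original sources: builds the same ICM\n# kernel MTGP uses above and recovers the coregionalization matrix from the\n# docstring formula, W = W_r W_r^T + diag(kappa). Assumes GPy is installed and\n# that the coregionalize sub-kernel is exposed as the `B` attribute, as in\n# current GPy releases.\nimport numpy as np\nimport GPy\n\nn_features = 3\nn_tasks = 3  # e.g. two source tasks plus one target task\n\nicm = GPy.util.multioutput.ICM(\n    input_dim=n_features,\n    num_outputs=n_tasks,\n    kernel=GPy.kern.RBF(n_features),\n)\n\n# task-covariance block: its entries play the role of W_ss, W_st, W_tt above\nW = icm.B.W.values @ icm.B.W.values.T + np.diag(icm.B.kappa.values)\nprint(W.shape)  # (n_tasks, n_tasks)\n"
  },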
  {
    "path": "transopt/optimizer/model/neuralprocess.py",
    "content": "import copy\nimport numpy as np\nfrom typing import Dict, Hashable, Union, Sequence, Tuple, List\n\nfrom transopt.optimizer.model.model_base import Model\nfrom transopt.agent.registry import model_registry\n\n@model_registry.register(\"NeuralProcess\")\nclass NeuralProcess(Model):\n    def __init__(self):\n        super().__init__()\n        \n    "
  },
  {
    "path": "transopt/optimizer/model/parego.py",
    "content": "import GPy\nimport numpy as np\nfrom sklearn.preprocessing import StandardScaler\n\nfrom transopt.agent.registry import model_registry\nfrom transopt.optimizer.model.model_base import Model\n\n\n@model_registry.register(\"ParEGO\")\nclass ParEGO(Model):\n    def __init__(self, seed=0, normalize=True, **options):\n        super().__init__()\n        self.seed = seed\n        self.normalize = normalize\n        self.models = None\n        self._x_normalizer = StandardScaler() if normalize else None\n        self._y_normalizer = StandardScaler() if normalize else None\n        self._options = options\n        self.rho = 0.1\n\n    def fit(self, X, Y):\n        self._X = np.copy(X)\n        self._Y = np.copy(Y)\n        if self.normalize:\n            X = self._x_normalizer.fit_transform(X)\n            Y = self._y_normalizer.fit_transform(Y)\n        self._update_model(X, Y)\n\n    def predict(self, X, full_cov=False):\n        return self._make_prediction(X, full_cov)\n\n    def _scalarization(self, Y: np.ndarray, rho):\n        theta = np.random.random_sample(Y.shape[0])\n        sum_theta = np.sum(theta)\n        theta = theta / sum_theta\n\n        theta_f = Y.T * theta\n        max_k = np.max(theta_f, axis=1)\n        rho_sum_theta_f = rho * np.sum(theta_f, axis=1)\n\n        return max_k + rho_sum_theta_f\n    \n    def _create_model(self, X, Y):\n        kernel = GPy.kern.RBF(input_dim=X.shape[1])\n        model = GPy.models.GPRegression(X, Y, kernel=kernel, normalizer=None)\n        model[\".*Gaussian_noise.variance\"].constrain_fixed(1.0e-4)\n        model[\".*rbf.variance\"].constrain_fixed(1.0)\n        self.model = model\n\n    def _update_model(self, X, Y):\n        Y_scalar = self._scalarization(Y, self.rho)[:, np.newaxis]\n        \n        if not self.model:\n            self._create_model(X, Y_scalar)\n        else:\n            self.model.set_XY(X, Y_scalar)\n         \n        try:\n            self.model.optimize_restarts(num_restarts=1, verbose=self._options.get(\"verbose\", False), robust=True)\n        except np.linalg.linalg.LinAlgError as e:\n            print(\"Error during model optimization: \", e)\n    \n    def _make_prediction(self, X, full_cov=False):\n        pred_mean = np.zeros((X.shape[0], 0))\n        pred_var = np.zeros((X.shape[0], 0)) if not full_cov else np.zeros((0, X.shape[0], X.shape[0]))\n        \n        if self.model:\n            mean, var = self.model.predict(X, full_cov=full_cov)\n            pred_mean = np.append(pred_mean, mean, axis=1)\n            if full_cov:\n                pred_var = np.append(pred_var, [var], axis=0)\n            else:\n                pred_var = np.append(pred_var, var, axis=1)\n        \n        return pred_mean, pred_var\n "
  },
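  {
    "path": "examples/parego_scalarization_sketch.py",
    "content": "# Illustrative sketch, not part of the original sources: a worked example of\n# the augmented Tchebycheff scalarization in `ParEGO._scalarization` above.\n# With random normalized weights theta, each point y is reduced to\n# max_i(theta_i * y_i) + rho * sum_i(theta_i * y_i), and the class assumes the\n# (n_objectives, n_points) layout used here.\nimport numpy as np\n\nrng = np.random.default_rng(0)\nrho = 0.1\n\nY = np.array([[1.0, 2.0, 4.0],   # objective 1 for three points\n              [4.0, 2.0, 1.0]])  # objective 2 for three points\n\ntheta = rng.random(Y.shape[0])\ntheta = theta / theta.sum()       # one normalized weight per objective\n\ntheta_f = Y.T * theta             # (n_points, n_objectives)\nscalar = np.max(theta_f, axis=1) + rho * np.sum(theta_f, axis=1)\nprint(scalar)  # one scalar target per point, ready for a single GP\n"
  },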
  {
    "path": "transopt/optimizer/model/pr.py",
    "content": "import numpy as np\nfrom typing import Tuple, Dict, List\nfrom sklearn.preprocessing import StandardScaler\nfrom transopt.optimizer.model.model_base import  Model\nfrom transopt.optimizer.model.utils import is_pd, nearest_pd\nfrom transopt.agent.registry import model_registry\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.preprocessing import PolynomialFeatures\n\n\n@model_registry.register('PR')\nclass PR(Model):\n    def __init__(\n        self,\n        degree: int = 10,\n        normalize: bool = True,\n        **options: dict\n    ):\n        super().__init__()\n        self._degree = degree\n        self._pr_model = None\n\n        self._normalize = normalize\n        self._x_normalizer = StandardScaler() if normalize else None\n        self._y_normalizer = StandardScaler() if normalize else None\n\n        self._options = options\n\n    def meta_fit(\n        self,\n        source_X : List[np.ndarray],\n        source_Y : List[np.ndarray],\n        **kwargs,\n    ):\n        pass\n\n    def fit(\n        self,\n        X: np.ndarray,\n        Y: np.ndarray,\n        optimize: bool = True,\n    ):\n        self._X = np.copy(X)\n        self._y = np.copy(Y)\n        self._Y = np.copy(Y)\n\n        _X = np.copy(self._X)\n        _y = np.copy(self._y)\n\n        if self._normalize:\n            _X = self._x_normalizer.fit_transform(_X)\n            _y = self._y_normalizer.fit_transform(_y)\n\n        if self._pr_model is None:\n            self._poly_features = PolynomialFeatures(degree=self._degree)\n            X_poly = self._poly_features.fit_transform(_X)\n            self._pr_model = LinearRegression()\n            self._pr_model.fit(X_poly, _y)\n        else:\n            X_poly = self._poly_features.fit_transform(_X)\n            self._pr_model.fit(X_poly, _y)\n\n    def predict(\n        self,\n        X: np.ndarray,\n    ) -> Tuple[np.ndarray, np.ndarray]:\n        if X.ndim == 1:\n            X = X.reshape(1, -1)\n\n        X_poly = self._poly_features.transform(X)\n        Y = self._pr_model.predict(X_poly)\n        return Y, None"
  },
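  {
    "path": "examples/polynomial_surrogate_sketch.py",
    "content": "# Hypothetical sketch, not part of the original sources: the two-step pattern\n# behind the PR surrogate above. PolynomialFeatures expands the inputs into a\n# polynomial basis and LinearRegression is fit on that basis; a degree-d\n# expansion of k features has C(k + d, d) columns, which is why large degrees\n# (the class defaults to 10) grow quickly and can overfit small data sets.\nimport numpy as np\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.preprocessing import PolynomialFeatures\n\nrng = np.random.default_rng(0)\nX = rng.random((50, 2))\ny = X[:, :1] ** 2 + X[:, 1:] + 0.01 * rng.standard_normal((50, 1))\n\npoly = PolynomialFeatures(degree=2)\nX_poly = poly.fit_transform(X)  # columns: 1, x1, x2, x1^2, x1*x2, x2^2\nmodel = LinearRegression().fit(X_poly, y)\n\nprint(X_poly.shape)  # (50, 6)\nprint(model.predict(poly.transform(np.array([[0.5, 0.5]]))))\n"
  },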
  {
    "path": "transopt/optimizer/model/rbfn.py",
    "content": "from typing import List, Tuple\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom sklearn.cluster import KMeans\nfrom sklearn.preprocessing import StandardScaler\nfrom torch.autograd import Variable\nfrom torch.utils.data import DataLoader, Dataset\n\nfrom transopt.agent.registry import model_registry\nfrom transopt.optimizer.model.model_base import Model\n\n\nclass RegressionDataset(Dataset):\n    \"\"\"create a dataset that complies with PyTorch \"\"\"\n    def __init__(self, inputs, targets):\n        self.inputs = inputs\n        self.targets = targets\n\n    def __len__(self):\n        return len(self.inputs)\n\n    def __getitem__(self, index):\n        x = self.inputs[index]\n        y = self.targets[index]\n        return x, y\n\n\nclass RbfNet(nn.Module):\n    def __init__(self, centers, beta):\n        super(RbfNet, self).__init__()\n        self.num_centers = centers.size(0)\n        self.centers = nn.Parameter(centers)\n        self.beta = nn.Parameter(beta)\n        self.linear = nn.Linear(self.num_centers, 1)\n        nn.init.xavier_uniform_(self.linear.weight)\n\n    def kernel_fun(self, batches):\n        n_input = batches.size(0)\n        A = self.centers.view(self.num_centers, -1).repeat(n_input, 1, 1)\n        B = batches.view(n_input, -1).unsqueeze(1).repeat(1, self.num_centers, 1)\n        C = torch.exp(-self.beta.mul((A - B).pow(2).sum(2, keepdims=False).sqrt()))\n        return C\n\n    def forward(self, x):\n        x = self.kernel_fun(x)\n        x = self.linear(x)\n        return x\n\n\nclass rbfn(object):\n    def __init__(self, dataset, max_epoch=30, batch_size=5, lr=0.01, num_centers=5, show_details=False):\n        self.max_epoch = max_epoch\n        self.batch_size = batch_size\n        self.lr = lr\n        self.num_centers = num_centers\n        self.dim = dataset.inputs.shape[1]\n\n        # create the DataLoader for training\n        self.dataset = dataset\n        self.data_loader = DataLoader(dataset=dataset,\n                                      batch_size=self.batch_size,\n                                      shuffle=True,\n                                      num_workers=1)\n\n        # cluster\n        self.centers = self.cluster()\n        self.beta = self.calculate_beta()\n        # create Rbf network\n        self.model = RbfNet(self.centers, self.beta)\n        self.optimizer = optim.Adam(self.model.parameters(), lr=self.lr)\n        self.loss_fun = nn.MSELoss()\n        self.avg_loss = 0\n        self.show_details = show_details\n\n    def train(self):\n        self.model.train()\n        for epoch in range(self.max_epoch):\n            self.avg_loss = 0\n            total_batch = len(self.dataset) // self.batch_size\n\n            for i, (input, output) in enumerate(self.data_loader):\n                X = Variable(input.view(-1, self.dim))\n                Y = Variable(output)\n\n                self.optimizer.zero_grad()\n                Y_prediction = self.model(X)\n                loss = self.loss_fun(Y_prediction, Y)\n                loss.backward()\n                self.optimizer.step()\n                self.avg_loss += loss / total_batch\n            if self.show_details:\n                print(\"[Epoch: {:>4}] loss = {:>.9}\".format(epoch + 1, self.avg_loss))\n        print(\"[*] Training finished! 
Loss: {:.9f}\".format(self.avg_loss))\n\n    def predict(self, x):\n        self.model.eval()\n        # convert to float32 so the input matches the float32 network weights\n        x = torch.from_numpy(x).float()\n        x = Variable(x)\n        y = self.model(x)\n        return y.data.numpy()\n\n    def cluster(self):\n        kmeans = KMeans(n_clusters=self.num_centers)\n        kmeans.fit(self.dataset.inputs)\n        centers = kmeans.cluster_centers_\n        return torch.from_numpy(centers).float()\n\n    def calculate_beta(self):\n        r2 = torch.ones(1, self.num_centers)\n        for i, center in enumerate(self.centers):\n            distances = torch.linalg.norm(self.centers - center, dim=1)\n            nearest_two_neighbors_indices = torch.argsort(distances)[:2]\n            r2[0][i] = torch.sum(distances[nearest_two_neighbors_indices]**2) / 2\n        beta = 1 / r2\n        return beta\n\n    def update_dataset(self, dataset):\n        self.dataset = dataset\n        self.data_loader = DataLoader(dataset=dataset,\n                                      batch_size=self.batch_size,\n                                      shuffle=True,\n                                      num_workers=1)\n\n\n@model_registry.register('RBFN')\nclass RBFN(Model):\n    def __init__(\n        self,\n        max_epoch: int = 30,\n        batch_size: int = 1,\n        lr: float = 0.01,\n        num_centers: int = 10,\n        show_details: bool = False,\n        normalize: bool = True,\n        **options: dict\n    ):\n        super().__init__()\n        self._max_epoch = max_epoch\n        self._batch_size = batch_size\n        self._lr = lr\n        self._num_centers = num_centers\n        self._rbfn_model = None\n        self._show_details = show_details\n\n        self._normalize = normalize\n        self._x_normalizer = StandardScaler() if normalize else None\n        self._y_normalizer = StandardScaler() if normalize else None\n\n        self._options = options\n\n    def meta_fit(\n        self,\n        source_X : List[np.ndarray],\n        source_Y : List[np.ndarray],\n        **kwargs,\n    ):\n        pass\n\n    def fit(\n        self,\n        X: np.ndarray,\n        Y: np.ndarray,\n        optimize: bool = True,\n    ):\n        self._X = np.copy(X)\n        self._y = np.copy(Y)\n        self._Y = np.copy(Y)\n\n        _X = np.copy(self._X)\n        _y = np.copy(self._y)\n\n        if self._normalize:\n            _X = self._x_normalizer.fit_transform(_X)\n            _y = self._y_normalizer.fit_transform(_y)\n\n        # keep the tensors in float32 so they match the float32 linear layer\n        dataset = RegressionDataset(torch.from_numpy(_X).float(), torch.from_numpy(_y).float())\n        if self._rbfn_model is None:\n            self._rbfn_model = rbfn(\n                dataset=dataset,\n                max_epoch=self._max_epoch,\n                batch_size=self._batch_size,\n                lr=self._lr,\n                num_centers=self._num_centers,\n                show_details=self._show_details,\n            )\n        else:\n            self._rbfn_model.update_dataset(dataset)\n\n        try:\n            self._rbfn_model.train()\n        except np.linalg.LinAlgError as e:\n            print('Error during RBFN training: ', e)\n\n    def predict(\n        self,\n        X: np.ndarray,\n    ) -> Tuple[np.ndarray, np.ndarray]:\n        if X.ndim == 1:\n            X = X[None, :]\n\n        # the network was trained in normalized space, so map the inputs there\n        # and map the prediction back\n        if self._normalize:\n            X = self._x_normalizer.transform(X)\n\n        Y = self._rbfn_model.predict(X)\n        if self._normalize:\n            Y = self._y_normalizer.inverse_transform(Y)\n        return Y, None"
  },
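  {
    "path": "examples/rbf_features_sketch.py",
    "content": "# Illustrative sketch, not part of the original sources: a numpy version of\n# the radial-basis expansion computed by `RbfNet.kernel_fun` above. Each input\n# is mapped to exp(-beta_j * ||x - c_j||) for every k-means center c_j, and a\n# linear layer is then fit on top of this design matrix.\nimport numpy as np\n\n\ndef rbf_design_matrix(X, centers, beta):\n    # X: (n, d); centers: (m, d); beta: (m,) -> feature matrix of shape (n, m)\n    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)\n    return np.exp(-beta * dists)\n\n\nif __name__ == '__main__':\n    rng = np.random.default_rng(0)\n    X = rng.random((8, 2))\n    centers = rng.random((3, 2))\n    beta = np.ones(3)\n    print(rbf_design_matrix(X, centers, beta).shape)  # (8, 3)\n"
  },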
  {
    "path": "transopt/optimizer/model/rf.py",
    "content": "# Copyright (c) 2021 Robert Bosch GmbH\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published\n# by the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <https://www.gnu.org/licenses/>.\n\nimport copy\nimport numpy as np\nfrom typing import Tuple, Dict, List\nfrom sklearn.preprocessing import StandardScaler\n\nfrom transopt.optimizer.model.model_base import  Model\nfrom transopt.agent.registry import model_registry\nfrom sklearn.ensemble import RandomForestRegressor\n\n\n@model_registry.register('RF')\nclass RF(Model):\n    def __init__(\n        self,\n        name = 'RandomForest',\n        num_estimators = 100,\n        seed = 0,\n        normalize: bool = True,\n        **options: dict\n    ):\n        \"\"\"Initialize the Method.\n        \"\"\"\n        super().__init__()\n        self.name = name\n        self.num_estimators = num_estimators\n\n        self.model = RandomForestRegressor(\n            n_estimators=100,\n            max_features='sqrt',\n            bootstrap=True,\n            random_state=seed\n        )\n\n        self._normalize = normalize\n        self._x_normalizer = StandardScaler() if normalize else None\n        self._y_normalizer = StandardScaler() if normalize else None\n\n        self._options = options\n\n    def meta_fit(\n        self,\n        source_X : List[np.ndarray],\n        source_Y : List[np.ndarray],\n        **kwargs,\n    ):\n        pass\n\n    def fit(\n        self,\n        X: np.ndarray,\n        Y: np.ndarray,\n        optimize: bool = True,\n    ):\n        self._X = np.copy(X)\n        self._y = np.copy(Y)\n        self._Y = np.copy(Y)\n\n        _X = np.copy(self._X)\n        _y = np.copy(self._y)\n\n        if self._normalize:\n            _X = self._x_normalizer.fit_transform(_X)\n            _y = self._y_normalizer.fit_transform(_y)\n        self.model.fit(_X, _y)\n\n\n    def predict(\n        self, X, return_full: bool = False, with_noise: bool = False\n    ) -> Tuple[np.ndarray, np.ndarray]:\n        mean, var = self._raw_predict(X, return_full, with_noise)\n\n        return mean, var\n\n    def _raw_predict(\n        self, X, return_full: bool = False, with_noise: bool = False\n    ) -> Tuple[np.ndarray, np.ndarray]:\n        \"\"\"Predict functions distribution(s) for given test point(s) without taking into\n        account data normalization. 
If `self._normalize` is `False`, return the same as\n        `self.predict()`.\n\n        Same input/output as `self.predict()`.\n        \"\"\"\n        _X_test = X.copy()\n\n        mu = self.model.predict(_X_test)\n        var = self._raw_predict_var(_X_test, self.model, mu)\n        return mu[:,np.newaxis], var[:,np.newaxis]\n\n    def _raw_predict_var(self, X, trees, predictions, min_variance=0.0):\n        # Derives Var(y | x) via the law of total variance over the trees,\n        # as described in section 4.3.2 of arXiv:1211.0906\n        var = np.zeros(len(X))\n\n        for tree in trees:\n            var_tree = tree.tree_.impurity[tree.apply(X)]\n\n            var_tree[var_tree < min_variance] = min_variance\n            mean_tree = tree.predict(X)\n            var += var_tree + mean_tree ** 2\n\n        var /= len(trees)\n        var -= predictions ** 2.0\n        var[var < 0.0] = 0.0\n        return var\n\n    def sample(\n        self, X, size: int = 1, with_noise: bool = False\n    ) -> np.ndarray:\n        \"\"\"Perform model inference.\n\n        Sample functions from the posterior distribution for the given test points.\n\n        Args:\n            X: Input data to predict on. `shape = (n_points, n_features)`\n            size: Number of functions to sample.\n            with_noise: If `False`, the latent function `f` is considered. If `True`,\n                the observed function `y` that includes the noise variance is\n                considered.\n\n        Returns:\n            Sampled function value for every input. `shape = (n_points, size)`\n        \"\"\"\n        mean, var = self.predict(X, with_noise=with_noise)\n        mean = mean.flatten()\n        # the forest only provides per-point variances, so sample from a\n        # diagonal Gaussian\n        sample = np.random.multivariate_normal(mean, np.diag(var.flatten()), size).T\n        return sample\n\n    def get_fmin(self):\n        # `self._y` stores the raw (un-normalized) targets\n        return np.min(self._y)"
  },
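  {
    "path": "examples/rf_variance_sketch.py",
    "content": "# Illustrative sketch, not part of the original sources: the per-point\n# variance estimate used by `RF._raw_predict_var` above (section 4.3.2 of\n# arXiv:1211.0906). By the law of total variance over the trees of the forest,\n# Var[y|x] = mean_t(var_t(x) + mu_t(x)^2) - (mean_t mu_t(x))^2.\nimport numpy as np\n\n# per-tree leaf means and leaf variances (impurities) at one test point x\nmu_t = np.array([1.0, 1.2, 0.8, 1.1])\nvar_t = np.array([0.10, 0.05, 0.20, 0.08])\n\nmu = mu_t.mean()                          # forest mean at x\nvar = np.mean(var_t + mu_t**2) - mu**2    # forest variance at x\nprint(mu, var)\n"
  },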
  {
    "path": "transopt/optimizer/model/rgpe.py",
    "content": "#Practical gaussian process\nimport copy\nfrom typing import Dict, List, Union, Sequence\n\nimport GPy\nimport numpy as np\nfrom GPy.kern import RBF, Kern\n\nfrom transopt.agent.registry import model_registry\nfrom transopt.optimizer.model.gp import GP\nfrom transopt.optimizer.model.model_base import Model\n\n\ndef roll_col(X: np.ndarray, shift: int) -> np.ndarray:\n    \"\"\"\n    Rotate columns to right by shift.\n    \"\"\"\n    return np.concatenate((X[:, -shift:], X[:, :-shift]), axis=1)\n\n\ndef compute_ranking_loss(\n    f_samps: np.ndarray,\n    target_y: np.ndarray,\n    target_model: bool,\n) -> np.ndarray:\n    \"\"\"\n    Compute ranking loss for each sample from the posterior over target points.\n    \"\"\"\n    y_stack = np.tile(target_y.reshape((-1, 1)), f_samps.shape[0]).transpose()\n    rank_loss = np.zeros(f_samps.shape[0])\n    if not target_model:\n        for i in range(1, target_y.shape[0]):\n            rank_loss += np.sum(\n                (roll_col(f_samps, i) < f_samps) ^ (roll_col(y_stack, i) < y_stack),\n                axis=1\n            )\n    else:\n        for i in range(1, target_y.shape[0]):\n            rank_loss += np.sum(\n                (roll_col(f_samps, i) < y_stack) ^ (roll_col(y_stack, i) < y_stack),\n                axis=1\n            )\n\n    return rank_loss\n\n\n@model_registry.register('RGPE')\nclass RGPE(Model):\n    def __init__(\n            self,\n            kernel: Kern = None,\n            noise_variance: float = 1.0,\n            normalize: bool = True,\n            Seed = 0,\n            sampling_mode: str = 'bootstrap',\n            weight_dilution_strategy = 'probabilistic',\n            **options: dict,\n    ):\n        super().__init__()\n        # GP on difference between target data and last source data set\n        self._noise_variance = noise_variance\n        self._metadata = {}\n        self._source_gps = {}\n        self._source_gp_weights = {}\n        self.sampling_mode = sampling_mode\n        self._normalize = normalize\n        self.Seed = Seed\n        self.rng = np.random.RandomState(self.Seed)\n        self.weight_dilution_strategy = weight_dilution_strategy\n\n        self.target_model = None\n        self._target_model_weight = 1\n    \n    \n    def _meta_fit_single_gp(\n        self,\n        X : np.ndarray,\n        Y : np.ndarray,\n        optimize: bool,\n    ) -> GP:\n        \"\"\"Train a new source GP on `data`.\n\n        Args:\n            data: The source dataset.\n            optimize: Switch to run hyperparameter optimization.\n\n        Returns:\n            The newly trained GP.\n        \"\"\"\n        self.n_features = X.shape[1]\n                \n        kernel = RBF(self.n_features, ARD=True)\n        new_gp = GP(\n            kernel, noise_variance=self._noise_variance\n        )\n        new_gp.fit(\n            X = X,\n            Y = Y,\n            optimize = optimize,\n        )\n        return new_gp\n    \n    def meta_fit(self,\n                source_X : List[np.ndarray],\n                source_Y : List[np.ndarray],\n                optimize: Union[bool, Sequence[bool]] = True):\n        # metadata, _ = SourceSelection.the_k_nearest(source_datasets)\n\n        self._metadata = {'X': source_X, 'Y':source_Y}\n        self._source_gps = {}\n        \n        \n        assert isinstance(optimize, bool) or isinstance(optimize, list)\n        if isinstance(optimize, list):\n            assert len(source_X) == len(optimize)\n        optimize_flag = copy.copy(optimize)\n\n 
        if isinstance(optimize_flag, bool):\n            optimize_flag = [optimize_flag] * len(source_X)\n        \n        for i in range(len(source_X)):\n            new_gp = self._meta_fit_single_gp(\n                source_X[i],\n                source_Y[i],\n                optimize=optimize_flag[i],\n            )\n            self._source_gps[i] = new_gp\n\n        self._calculate_weights()\n\n\n    def fit(self,\n            X: np.ndarray,\n            Y: np.ndarray,\n            optimize: bool = False):\n\n        self._X = copy.deepcopy(X)\n        self._Y = copy.deepcopy(Y)\n\n        self.n_samples, n_features = self._X.shape\n        if self.n_features != n_features:\n            raise ValueError(\"Number of features in model and input data mismatch.\")\n\n        kern = GPy.kern.RBF(self.n_features, ARD=False)\n\n        self.target_model = GPy.models.GPRegression(self._X, self._Y, kernel=kern)\n        self.target_model['Gaussian_noise.*variance'].constrain_bounded(1e-9, 1e-3)\n\n        try:\n            self.target_model.optimize_restarts(num_restarts=1, verbose=False, robust=True)\n        except np.linalg.LinAlgError as e:\n            print('Error during target model optimization: ', e)\n\n        self._calculate_weights()\n\n    def predict(\n        self, X, return_full: bool = False, with_noise: bool = False\n    ):\n\n        X_test = X\n        n_models = len(self._source_gp_weights)\n        if self._target_model_weight > 0:\n            n_models += 1\n        n_sample = X_test.shape[0]\n        means = np.empty((n_models, n_sample, 1))\n        weights = np.empty((n_models, 1))\n        if not return_full:\n            vars_ = np.empty((n_models, n_sample, 1))\n        else:\n            vars_ = np.empty((n_models, n_sample, n_sample))\n        for task_uid, weight in enumerate(self._source_gp_weights):\n            means[task_uid], vars_[task_uid] = self._source_gps[task_uid].predict(X_test)\n            weights[task_uid] = weight\n        if self._target_model_weight > 0:\n            means[-1], vars_[-1] = self.target_model.predict(X_test)\n            weights[-1] = self._target_model_weight\n        weights = weights[:,:,np.newaxis]\n        mean = np.sum(weights * means, axis=0)\n        var = np.sum(weights ** 2 * vars_, axis=0)\n        return mean, var\n\n\n    def _calculate_weights(self, alpha: float = 0.0):\n        if len(self._source_gps) == 0:\n            self._target_model_weight = 1\n            return\n\n        if self._X is None:\n            weight = 1 / len(self._source_gps)\n            self._source_gp_weights = [weight for task_uid in self._source_gps]\n            self._target_model_weight = 0\n            return\n        \n        kernel = RBF(self.n_features, ARD=True)\n        if self.sampling_mode == 'bootstrap':\n            predictions = []\n            for model_idx in range(len(self._source_gps)):\n                model = self._source_gps[model_idx]\n                predictions.append(model.predict(self._X)[0].flatten()) # ndarray(n,)\n\n            masks = np.eye(len(self._X), dtype=bool)\n            train_x_cv = np.stack([self._X[~m] for m in masks])\n            train_y_cv = np.stack([self._Y[~m] for m in masks])\n            test_x_cv = np.stack([self._X[m] for m in masks])\n            \n            model = GP(copy.deepcopy(kernel), noise_variance=self._noise_variance)\n\n            loo_prediction = []\n            for i in range(self._Y.shape[0]):\n                model.fit(train_x_cv[i], 
train_y_cv[i], optimize=False)\n                loo_prediction.append(model.predict(test_x_cv[i])[0][0][0])\n            predictions.append(loo_prediction)\n            predictions = np.array(predictions)\n\n            bootstrap_indices = self.rng.choice(predictions.shape[1],\n                                           size=(self.n_samples, predictions.shape[1]),\n                                           replace=True)\n\n            bootstrap_predictions = []\n            bootstrap_targets = self._Y[bootstrap_indices].reshape((self.n_samples, len(self._Y)))\n            for m in range(len(self._source_gps) + 1):\n                bootstrap_predictions.append(predictions[m, bootstrap_indices])\n\n            ranking_losses = np.zeros((len(self._source_gps) + 1, self.n_samples))\n            for i in range(len(self._source_gps)):\n\n                for j in range(len(self._Y)):\n                    ranking_losses[i] += np.sum(\n                        (\n                            roll_col(bootstrap_predictions[i], j) < bootstrap_predictions[i])\n                            ^ (roll_col(bootstrap_targets, j) < bootstrap_targets\n                        ), axis=1\n                    )\n            for j in range(len(self._Y)):\n                ranking_losses[-1] += np.sum(\n                    (\n                        (roll_col(bootstrap_predictions[-1], j) < bootstrap_targets)\n                        ^ (roll_col(bootstrap_targets, j) < bootstrap_targets)\n                    ), axis=1\n                )\n        # elif self.sampling_mode in ['simplified', 'correct']:\n        #     # Use the original strategy as described in v1: https://arxiv.org/pdf/1802.02219v1.pdf\n        #     ranking_losses = []\n        #     # compute ranking loss for each base model\n        #     for model_idx in range(len(self.source_gps)):\n        #         model = self.source_gps[model_idx]\n        #         # compute posterior over training points for target task\n        #         f_samps = sample_sobol(model, self._X, self.n_samples, self.rng.randint(10000))\n        #         # compute and save ranking loss\n        #         ranking_losses.append(compute_ranking_loss(f_samps, self._Y, target_model=False))\n        #\n        #     # compute ranking loss for target model using LOOCV\n        #     if self.sampling_mode == 'simplified':\n        #         # Independent draw of the leave one out sample, other \"samples\" are noise-free and the\n        #         # actual observation\n        #         f_samps = get_target_model_loocv_sample_preds(self._X, self._Y, self.n_samples, target_model,\n        #                                                       self.rng.randint(10000))\n        #         ranking_losses.append(compute_ranking_loss(f_samps, self._Y, target_model=True))\n        #     elif self.sampling_mode == 'correct':\n        #         # Joint draw of the leave one out sample and the other observations\n        #         ranking_losses.append(\n        #             compute_target_model_ranking_loss(train_x, train_y, num_samples, target_model,\n        #                                               rng.randint(10000))\n        #         )\n        #     else:\n        #         raise ValueError(self.sampling_mode)\n        else:\n            raise NotImplementedError(self.sampling_mode)\n\n        if isinstance(self.weight_dilution_strategy, int):\n            weight_dilution_percentile_target = self.weight_dilution_strategy\n            weight_dilution_percentile_base = 50\n        
elif self.weight_dilution_strategy is None or self.weight_dilution_strategy in ['probabilistic', 'probabilistic-ld']:\n            pass\n        else:\n            raise ValueError(self.weight_dilution_strategy)\n        ranking_loss = np.array(ranking_losses)\n\n        # perform model pruning\n        p_drop = []\n        if self.weight_dilution_strategy in ['probabilistic', 'probabilistic-ld']:\n            for i in range(len(self._source_gps)):\n                better_than_target = np.sum(ranking_loss[i, :] < ranking_loss[-1, :])\n                worse_than_target = np.sum(ranking_loss[i, :] >= ranking_loss[-1, :])\n                correction_term = alpha * (better_than_target + worse_than_target)\n                proba_keep = better_than_target / (better_than_target + worse_than_target + correction_term)\n                if self.weight_dilution_strategy == 'probabilistic-ld':\n                    proba_keep = proba_keep * (1 - len(self._X) / float(self.number_of_function_evaluations))\n                proba_drop = 1 - proba_keep\n                p_drop.append(proba_drop)\n                r = self.rng.rand()\n                if r < proba_drop:\n                    ranking_loss[i, :] = np.max(ranking_loss) * 2 + 1\n        elif self.weight_dilution_strategy is not None:\n            # Use the original strategy as described in v1: https://arxiv.org/pdf/1802.02219v1.pdf\n            percentile_base = np.percentile(ranking_loss[: -1, :], weight_dilution_percentile_base, axis=1)\n            percentile_target = np.percentile(ranking_loss[-1, :], weight_dilution_percentile_target)\n            for i in range(len(self._source_gps)):\n                if percentile_base[i] >= percentile_target:\n                    ranking_loss[i, :] = np.max(ranking_loss) * 2 + 1\n\n        # compute best model (minimum ranking loss) for each sample\n        # this differs from v1, where the weight is given only to the target model in case of a tie.\n        # Here, we distribute the weight fairly among all participants of the tie.\n        minima = np.min(ranking_loss, axis=0)\n        assert len(minima) == self.n_samples\n        best_models = np.zeros(len(self._source_gps) + 1)\n        for i, minimum in enumerate(minima):\n            minimum_locations = ranking_loss[:, i] == minimum\n            sample_from = np.where(minimum_locations)[0]\n\n            for sample in sample_from:\n                best_models[sample] += 1. 
/ len(sample_from)\n\n        # compute proportion of samples for which each model is best\n        rank_weights = best_models / self.n_samples\n\n        self._source_gp_weights = [rank_weights[task_uid] for task_uid in self._source_gps]\n        self._target_model_weight = rank_weights[-1]\n\n        return rank_weights, p_drop\n\n    def _calculate_weights_with_no_observations(self):\n        \"\"\"Calculate weights according to the given start method when no target\n        task observations exist.\n        \"\"\"\n\n        first, _, _ = self._start.partition(\"-\")\n\n        if first == \"random\":\n            # do nothing, predict should not yet be used\n            return\n\n        if first == \"mean\":\n            # assign equal weights to all base models\n            weight = 1 / len(self._source_gps)\n            self._source_gp_weights = {\n                task_uid: weight for task_uid in self._source_gps\n            }\n            self._target_model_weight = 0\n            return\n\n        raise RuntimeError(f\"Predict called without observations, first = {first}\")\n\n    def _calculate_weights_with_one_observation(self):\n        \"\"\"Calculate weights according to the given start method when only one\n        unique target task observation is available.\n        \"\"\"\n\n        _, _, second = self._start.partition(\"-\")\n\n        if second == \"random\":\n            # do nothing, predict should not be used yet\n            return\n\n        if second == \"mean\":\n            # assign equal weights to all base models and the target model\n            weight = 1 / (len(self._source_gps) + 1)\n            self._source_gp_weights = {\n                task_uid: weight for task_uid in self._source_gps\n            }\n            self._target_model_weight = weight\n            return\n\n        if second == \"weighted\":\n            # get unique observed point\n            X, indices = np.unique(self._X, axis=0, return_index=True)\n\n            # draw n_samples for each unique observed point from each\n            # base model\n            all_samples = np.empty((len(self._source_gps), self.n_samples))\n            for i, task_uid in enumerate(self._source_gps):\n                model = self._source_gps[task_uid]\n                samples = model.sample(\n                    X, size=self.n_samples, with_noise=True\n                )\n                all_samples[i] = samples\n\n            # compare drawn samples to observed values\n            y = self._Y[indices]\n            diff = np.abs(all_samples - y)\n\n            # get base model with lowest absolute difference for each sample\n            best = np.argmin(diff, axis=0)\n\n            # compute weight as proportion of samples where a base model is best\n            occurrences = np.bincount(best, minlength=len(self._source_gps))\n            weights = occurrences / self.n_samples\n            self._source_gp_weights = dict(zip(self._source_gps, weights))\n            self._target_model_weight = 0\n            return\n\n        raise RuntimeError(\n            f\"Weight calculation with one observation, second = {second}\"\n        )\n\n    def _update_meta_data(self, *gps: GPy.models.GPRegression):\n        \"\"\"Cache the meta data after meta training.\"\"\"\n        n_models = len(self._source_gps)\n        for task_uid, gp in enumerate(gps):\n            self._source_gps[n_models + task_uid] = gp\n\n    def meta_update(self):\n        self._update_meta_data(self.target_model)\n\n    def set_XY(self, 
Data:Dict):\n        self._X = copy.deepcopy(Data['X'])\n        self._Y = copy.deepcopy(Data['Y'])\n\n    def print_Weights(self):\n        print(f'Source weights:{self._source_gp_weights}')\n        print(f'Target weights:{self._target_model_weight}')\n\n    def get_Weights(self):\n        weights = self._source_gp_weights.copy()\n        weights.append(self._target_model_weight)\n        return weights\n\n\n    def loss(self, task_uid: int) -> np.ndarray:\n        model = self._source_gps[task_uid]\n        X = self._X\n        y = self._Y\n        samples = model.sample(X, size=self.n_samples, with_noise=True)\n        sample_comps = samples[:, np.newaxis, :] < samples\n        target_comps = np.tile(y[:, np.newaxis, :] < y, self.n_samples)\n        return np.sum(sample_comps ^ target_comps, axis=(1, 0))\n\n    def posterior_samples_f(self,X, size=10, **predict_kwargs):\n        \"\"\"\n        Samples the posterior GP at the points X.\n\n        :param X: The points at which to take the samples.\n        :type X: np.ndarray (Nnew x self.input_dim)\n        :param size: the number of a posteriori samples.\n        :type size: int.\n        :returns: set of simulations\n        :rtype: np.ndarray (Nnew x D x samples)\n        \"\"\"\n\n\n        predict_kwargs[\"full_cov\"] = True  # Always use the full covariance for posterior samples.\n        m, v = self._raw_predict(X,  **predict_kwargs)\n\n        def sim_one_dim(m, v):\n            return np.random.multivariate_normal(m, v, size).T\n\n        return sim_one_dim(m.flatten(), v)[:, np.newaxis, :]\n\n\n    def posterior_samples(self, X, size=10, Y_metadata=None, likelihood=None, **predict_kwargs):\n        \"\"\"\n        Samples the posterior GP at the points X.\n\n        :param X: the points at which to take the samples.\n        :type X: np.ndarray (Nnew x self.input_dim.)\n        :param size: the number of a posteriori samples.\n        :type size: int.\n        :param noise_model: for mixed noise likelihood, the noise model to use in the samples.\n        :type noise_model: integer.\n        :returns: Ysim: set of simulations,\n        :rtype: np.ndarray (D x N x samples) (if D==1 we flatten out the first dimension)\n        \"\"\"\n\n\n        fsim = self.posterior_samples_f(X, size, **predict_kwargs)\n        if likelihood is None:\n            likelihood = self.likelihood\n        if fsim.ndim == 3:\n            for d in range(fsim.shape[1]):\n                fsim[:, d] = likelihood.samples(fsim[:, d], Y_metadata=Y_metadata)\n        else:\n            fsim = likelihood.samples(fsim, Y_metadata=Y_metadata)\n        return fsim\n\n    def get_fmin(self):\n\n        return np.min(self._Y)"
  },
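  {
    "path": "examples/rank_weight_sketch.py",
    "content": "# Standalone numpy sketch of the bootstrap ranking-loss weighting used in\n# _calculate_weights above. Each model's predictions are bootstrap-resampled,\n# misranked pairs are counted per draw, and a model's weight is the fraction\n# of draws in which it attains the minimal loss (ties shared equally). The\n# toy data is illustrative, and the target model is simplified: the real\n# implementation scores its row with leave-one-out predictions against the\n# observed targets.\nimport numpy as np\n\n\ndef roll_col(X, shift):\n    return np.concatenate((X[:, -shift:], X[:, :-shift]), axis=1)\n\n\nrng = np.random.RandomState(0)\nn = 8\ny = rng.rand(n)  # target-task observations\npreds = np.stack([\n    y + 0.05 * rng.randn(n),  # informative source model\n    rng.rand(n),              # unrelated source model\n    y + 0.01 * rng.randn(n),  # stand-in for the target model\n])\n\nn_boot = 256\nidx = rng.choice(n, size=(n_boot, n), replace=True)\nboot_targets = y[idx]\n\nlosses = np.zeros((len(preds), n_boot))\nfor m in range(len(preds)):\n    bp = preds[m][idx]\n    for j in range(n):\n        losses[m] += np.sum(\n            (roll_col(bp, j) < bp) ^ (roll_col(boot_targets, j) < boot_targets),\n            axis=1,\n        )\n\n# weight = fraction of bootstrap draws in which a model is (co-)minimal\nis_min = losses == losses.min(axis=0)\nweights = (is_min / is_min.sum(axis=0)).mean(axis=1)\nprint(weights)  # the informative models should receive most of the mass\n"
  },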
  {
    "path": "transopt/optimizer/model/sgpt.py",
    "content": "import copy\nfrom typing import Dict, List, Sequence, Union\n\nimport GPy\nimport numpy as np\nfrom GPy.kern import RBF, Kern\n\nfrom transopt.agent.registry import model_registry\nfrom transopt.optimizer.model.gp import GP\nfrom transopt.optimizer.model.model_base import Model\n\n\ndef roll_col(X: np.ndarray, shift: int) -> np.ndarray:\n    \"\"\"\n    Rotate columns to right by shift.\n    \"\"\"\n    return np.concatenate((X[:, -shift:], X[:, :-shift]), axis=1)\n\n@model_registry.register(\"SGPT\")\nclass SGPT(Model):\n    def __init__(\n            self,\n            kernel: Kern = None,\n            noise_variance: float = 1.0,\n            normalize: bool = True,\n            Seed = 0,\n            bandwidth: float = 1,\n            **options: dict,\n    ):\n        super().__init__()\n        # GP on difference between target data and last source data set\n        self._noise_variance = noise_variance\n        self._metadata = {}\n        self._source_gps = {}\n        self._source_gp_weights = {}\n        self._normalize = normalize\n        self.Seed = Seed\n        self.rng = np.random.RandomState(self.Seed)\n        \n        self._metadata = {}\n        self._source_gps = {}\n        self._source_gp_weights = {}\n        self.bandwidth =bandwidth\n\n        self._target_model = None\n        self._target_model_weight = 1\n    \n    \n    def _meta_fit_single_gp(\n        self,\n        X : np.ndarray,\n        Y : np.ndarray,\n        optimize: bool,\n    ) -> GP:\n        \"\"\"Train a new source GP on `data`.\n\n        Args:\n            data: The source dataset.\n            optimize: Switch to run hyperparameter optimization.\n\n        Returns:\n            The newly trained GP.\n        \"\"\"\n        self.n_features = X.shape[1]\n                \n        kernel = RBF(self.n_features, ARD=True)\n        new_gp = GP(\n            kernel, noise_variance=self._noise_variance\n        )\n        new_gp.fit(\n            X = X,\n            Y = Y,\n            optimize = optimize,\n        )\n        return new_gp\n    \n    def meta_fit(self,\n            source_X : List[np.ndarray],\n            source_Y : List[np.ndarray],\n            optimize: Union[bool, Sequence[bool]] = True):\n        # metadata, _ = SourceSelection.the_k_nearest(source_datasets)\n\n        self._metadata = {'X': source_X, 'Y':source_Y}\n        self._source_gps = {}\n        \n        \n        assert isinstance(optimize, bool) or isinstance(optimize, list)\n        if isinstance(optimize, list):\n            assert len(source_X) == len(optimize)\n        optimize_flag = copy.copy(optimize)\n\n        if isinstance(optimize_flag, bool):\n            optimize_flag = [optimize_flag] * len(source_X)\n        \n        for i in range(len(source_X)):\n            new_gp = self._meta_fit_single_gp(\n                source_X[i],\n                source_Y[i],\n                optimize=optimize_flag[i],\n            )\n            self._source_gps[i] = new_gp\n\n        self._calculate_weights()\n\n\n    def fit(self, \n            X: np.ndarray,\n            Y: np.ndarray,\n            optimize: bool = False):\n\n        self._X = copy.deepcopy(X)\n        self._Y = copy.deepcopy(Y)\n\n        self.n_samples, n_features = self._X.shape\n        if self.n_features != n_features:\n            raise ValueError(\"Number of features in model and input data mismatch.\")\n\n        kern = GPy.kern.RBF(self.n_features, ARD=False)\n\n        self._target_model = GPy.models.GPRegression(self._X, 
self._Y, kernel=kern)\n        self._target_model['Gaussian_noise.*variance'].constrain_bounded(1e-9, 1e-3)\n\n        try:\n            self._target_model.optimize_restarts(num_restarts=1, verbose=False, robust=True)\n        except np.linalg.LinAlgError:\n            print('Error: np.linalg.LinAlgError')\n\n        self._calculate_weights()\n\n\n    def predict(self, X, return_full: bool = False, with_noise: bool = False):\n        X_test = X\n        n_models = len(self._source_gp_weights)\n        if self._target_model_weight > 0:\n            n_models += 1\n        n_sample = X_test.shape[0]\n        means = np.empty((n_models, n_sample, 1))\n        weights = np.empty((n_models, n_sample))\n        if not return_full:\n            vars_ = np.empty((n_models, n_sample, 1))\n        else:\n            vars_ = np.empty((n_models, n_sample, n_sample))\n        for task_uid, weight in enumerate(self._source_gp_weights):\n            means[task_uid], vars_[task_uid] = self._source_gps[task_uid].predict(X_test)\n            weights[task_uid] = weight\n        if self._target_model_weight > 0:\n            means[-1], vars_[-1] = self._target_model.predict(X_test)\n            weights[-1] = self._target_model_weight\n\n        weights = weights[:,:,np.newaxis]\n        mean = np.sum(weights * means, axis=0)\n        return mean, vars_[-1]\n\n    def Epanechnikov_kernel(self, X1, X2):\n        diff_matrix = X1 - X2\n        u = np.linalg.norm(diff_matrix, ord=2) / self.bandwidth**2  # normalized distance\n        if u < 1:\n            weight = 0.75 * (1 - u**2)  # Epanechnikov kernel weight\n        else:\n            weight = 0\n        return weight\n\n    def _calculate_weights(self, alpha: float = 0.0):\n        if self._X is None:\n            weight = 1 / len(self._source_gps)\n            self._source_gp_weights = [weight for _ in self._source_gps]\n            self._target_model_weight = 0\n            return\n\n        predictions = []\n        for model_idx in range(len(self._source_gps)):\n            model = self._source_gps[model_idx]\n            predictions.append(model.predict(self._X)[0].flatten())  # ndarray(n,)\n\n        predictions.append(self._target_model.predict(self._X)[0].flatten())\n        predictions = np.array(predictions)\n\n        bootstrap_indices = self.rng.choice(predictions.shape[1],\n                                            size=(self.n_samples, predictions.shape[1]),\n                                            replace=True)\n\n        bootstrap_predictions = []\n        bootstrap_targets = self._Y[bootstrap_indices].reshape((self.n_samples, len(self._Y)))\n        for m in range(len(self._source_gps) + 1):\n            bootstrap_predictions.append(predictions[m, bootstrap_indices])\n\n        # count correctly ranked pairs per bootstrap draw (the complement of\n        # the ranking loss); larger values mean better agreement with the data\n        ranking_losses = np.zeros((len(self._source_gps) + 1, self.n_samples))\n        for i in range(len(self._source_gps)):\n            for j in range(1, len(self._Y)):\n                ranking_losses[i] += np.sum(\n                    ~(\n                        (roll_col(bootstrap_predictions[i], j) < bootstrap_predictions[i])\n                        ^ (roll_col(bootstrap_targets, j) < bootstrap_targets)\n                    ), axis=1\n                )\n        for j in range(1, len(self._Y)):\n            ranking_losses[-1] += np.sum(\n                ~(\n                    (roll_col(bootstrap_predictions[-1], j) < bootstrap_targets)\n                    ^ (roll_col(bootstrap_targets, j) < bootstrap_targets)\n                ), axis=1\n            )\n        total_compare = len(self._Y) * (len(self._Y) - 1)\n        ranking_loss = np.array(ranking_losses) / total_compare\n\n        weights = [self.Epanechnikov_kernel(ranking_loss[task_uid], ranking_loss[-1]) for task_uid in self._source_gps]\n        weights.append(1.0)\n        weights = np.array(weights)/np.sum(weights)\n        self._source_gp_weights = [weights[task_uid] for task_uid in self._source_gps]\n        self._target_model_weight = weights[-1]\n\n    def posterior_samples_f(self,X, size=10, **predict_kwargs):\n        \"\"\"\n        Samples the posterior GP at the points X.\n\n        :param X: The points at which to take the samples.\n        :type X: np.ndarray (Nnew x self.input_dim)\n        :param size: the number of a posteriori samples.\n        :type size: int.\n        :returns: set of simulations\n        :rtype: np.ndarray (Nnew x D x samples)\n        \"\"\"\n        predict_kwargs[\"full_cov\"] = True  # Always use the full covariance for posterior samples.\n        m, v = self._raw_predict(X,  **predict_kwargs)\n\n        def sim_one_dim(m, v):\n            return np.random.multivariate_normal(m, v, size).T\n\n        return sim_one_dim(m.flatten(), v)[:, np.newaxis, :]\n\n\n    def posterior_samples(self, X, size=10, Y_metadata=None, likelihood=None, **predict_kwargs):\n        \"\"\"\n        Samples the posterior GP at the points X.\n\n        :param X: the points at which to take the samples.\n        :type X: np.ndarray (Nnew x self.input_dim.)\n        :param size: the number of a posteriori samples.\n        :type size: int.\n        :param noise_model: for mixed noise likelihood, the noise model to use in the samples.\n        :type noise_model: integer.\n        :returns: Ysim: set of simulations,\n        :rtype: np.ndarray (D x N x samples) (if D==1 we flatten out the first dimension)\n        \"\"\"\n        fsim = self.posterior_samples_f(X, size, **predict_kwargs)\n        if likelihood is None:\n            likelihood = self.likelihood\n        if fsim.ndim == 3:\n            for d in range(fsim.shape[1]):\n                fsim[:, d] = likelihood.samples(fsim[:, d], Y_metadata=Y_metadata)\n        else:\n            fsim = likelihood.samples(fsim, Y_metadata=Y_metadata)\n        return fsim\n\n    def get_fmin(self):\n        return np.min(self._Y)"
  },
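  {
    "path": "examples/sgpt_usage_sketch.py",
    "content": "# Usage sketch for the SGPT transfer surrogate above: meta_fit one GP per\n# source task, fit the target GP, then predict with the similarity-weighted\n# ensemble. The toy quadratic tasks are illustrative assumptions; only the\n# meta_fit/fit/predict calls mirror the class interface.\nimport numpy as np\n\nfrom transopt.optimizer.model.sgpt import SGPT\n\nrng = np.random.RandomState(0)\n# two related 1-D source tasks: shifted quadratics\nsource_X = [rng.uniform(-1, 1, (20, 1)) for _ in range(2)]\nsource_Y = [(X - 0.1 * k) ** 2 for k, X in enumerate(source_X)]\n\nmodel = SGPT(Seed=0, bandwidth=1.0)\nmodel.meta_fit(source_X, source_Y)\n\n# a handful of target-task observations, then refit and predict\nX = rng.uniform(-1, 1, (5, 1))\nmodel.fit(X, (X - 0.05) ** 2, optimize=True)\nmean, var = model.predict(rng.uniform(-1, 1, (3, 1)))\nprint(mean.shape, var.shape)\n"
  },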
  {
    "path": "transopt/optimizer/model/smsego.py",
    "content": "import GPy\nimport numpy as np\nfrom sklearn.preprocessing import StandardScaler\n\nfrom transopt.agent.registry import model_registry\nfrom transopt.optimizer.model.model_base import Model\n\n\n@model_registry.register(\"SMSEGO\")\nclass SMSEGO(Model):\n    def __init__(self, seed=0, normalize=True, **options):\n        super().__init__()\n        self.seed = seed\n        self.normalize = normalize\n        self.models = []\n        self._x_normalizer = StandardScaler() if normalize else None\n        self._y_normalizer = StandardScaler() if normalize else None\n        self._options = options\n        np.random.seed(self.seed)\n\n    def fit(self, X, Y):\n        self._X = np.copy(X)\n        self._Y = np.copy(Y)\n        if self.normalize:\n            X = self._x_normalizer.fit_transform(X)\n            Y = self._y_normalizer.fit_transform(Y.T).T  # Transpose Y to normalize across objectives\n        self._create_model(X, Y)\n\n    def predict(self, X, full_cov=False):\n        return self._make_prediction(X, full_cov)\n\n    def _create_model(self, X, Y):\n        for i in range(self.num_objective):\n            kernel = GPy.kern.RBF(input_dim=X.shape[1])\n            model = GPy.models.GPRegression(X, Y[i][:, np.newaxis], kernel=kernel)\n            model[\".*Gaussian_noise.variance\"].constrain_fixed(1.0e-4)\n            model[\".*rbf.variance\"].constrain_fixed(1.0)\n            self.models.append(model)\n\n    def _update_model(self, X, Y):\n        if not self.models:\n            self._create_model(X, Y)\n        else:\n            for i, model in enumerate(self.models):\n                model.set_XY(X, Y[i][:, np.newaxis])\n        \n        try:\n            for model in self.models:\n                model.optimize_restarts(num_restarts=1, verbose=self._options.get(\"verbose\", False), robust=True)\n        except np.linalg.linalg.LinAlgError as e:\n            print(\"Error during model optimization: \", e)\n\n    def _make_prediction(self, X, full_cov=False):\n        if len(X.shape) == 1:\n            X = X[np.newaxis, :]\n        pred_mean = np.zeros((X.shape[0], 0))\n        pred_var = np.zeros((X.shape[0], 0)) if not full_cov else np.zeros((0, X.shape[0], X.shape[0]))\n        \n        for model in self.models:\n            mean, var = model.predict(X, full_cov=full_cov)\n            pred_mean = np.append(pred_mean, mean, axis=1)\n            if full_cov:\n                pred_var = np.append(pred_var, [var], axis=0)\n            else:\n                pred_var = np.append(pred_var, var, axis=1)\n        \n        return pred_mean, pred_var\n"
  },
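  {
    "path": "examples/smsego_usage_sketch.py",
    "content": "# Usage sketch for SMSEGO's per-objective GPs. Y is laid out as\n# (num_objectives, n_points), matching fit()'s transpose-normalisation; note\n# predictions come back in the normalized objective scale. The toy\n# bi-objective data is an illustrative assumption.\nimport numpy as np\n\nfrom transopt.optimizer.model.smsego import SMSEGO\n\nrng = np.random.RandomState(0)\nX = rng.uniform(0, 1, (15, 2))\nY = np.stack([X.sum(axis=1), (X ** 2).sum(axis=1)])  # shape (2, 15)\n\nmodel = SMSEGO(seed=0)\nmodel.fit(X, Y)\nmean, var = model.predict(rng.uniform(0, 1, (4, 2)))\nprint(mean.shape, var.shape)  # (4, 2) each: one column per objective\n"
  },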
  {
    "path": "transopt/optimizer/model/utils.py",
    "content": "import itertools\nfrom typing import List, Union, Tuple\n\nimport numpy as np\nimport scipy\nfrom GPy.kern import Fixed, BasisFuncKernel\n\n\n\ndef is_pd(a: np.ndarray) -> bool:\n    \"\"\"Check whether matrix `a` is positive definite via Cholesky decomposition.\n\n    Args:\n        a: Input matrix.\n\n    Returns:\n        `True` if input matrix is positive-definite, `False` otherwise.\n    \"\"\"\n\n    try:\n        _ = np.linalg.cholesky(a)\n        return True\n    except np.linalg.LinAlgError:\n        return False\n\n\ndef nearest_pd(a: np.ndarray) -> np.ndarray:\n    \"\"\"Calculate the nearest positive-definite matrix to a given symmetric matrix `a`.\n\n    Nearest is defined by the Frobenius norm.\n\n    Args:\n        a: Symmetric matrix. `shape = (n, n)`\n\n    Returns:\n        The nearest positive-definite matrix to the input symmetric matrix `a`.\n        `shape = (n, n)`\n    \"\"\"\n    # compute eigendecomposition of symmetric matrix a\n    w, v = np.linalg.eigh(a)\n\n    # account for floating-point accuracy\n    spacing = np.spacing(np.linalg.norm(a))\n\n    # clip the eigenvalues at zero\n    wp = np.clip(w, spacing, None)\n\n    return np.dot(v, np.dot(np.diag(wp), np.transpose(v)))\n\n\ndef compute_cholesky(matrix: np.ndarray) -> np.ndarray:\n    \"\"\"Calculate the Cholesky decomposition of a matrix.\n\n    If the matrix is singular, a small constant is added to the diagonal of the matrix.\n    This Method is therefore useful for the calculation of GP posteriors.\n\n    Args:\n        matrix: The input matrix. `shape = (n_points, n_points)`\n\n    Returns:\n        The Cholesky decomposition stored in the lower triangle.\n            `shape = (n_points, n_points)`\n    \"\"\"\n    assert len(matrix.shape) <= 2, (\n        \"The matrix has more than two input dimensions. Cholesky decomposition\"\n        \"impossible.\"\n    )\n    assert (\n        matrix.shape[0] == matrix.shape[1]\n    ), \"The matrix is not square. Cholesky decomposition impossible.\"\n\n    _matrix = np.copy(matrix)  # to avoid modifying the input\n    for k in itertools.count(start=1):\n        try:\n            chol = scipy.linalg.cholesky(_matrix, lower=True)\n        except scipy.linalg.LinAlgError:\n            # Increase eigenvalues of matrix\n            np.fill_diagonal(_matrix, _matrix.diagonal() + 10 ** k * 1e-8)\n        else:\n            return chol\n\n\nclass FixedKernel(Fixed):\n    \"\"\"Fixed covariance kernel. 
Serializable version of the Fixed Kernel from `GPy`.\n\n    Serialization is required to initialize a `gpy_adapter` `Model` using this kernel.\n    \"\"\"\n\n    def __init__(\n        self,\n        input_dim: int,\n        covariance_matrix: np.ndarray,\n        active_dims: List[int] = None,\n        name=\"PosteriorCov\",\n    ):\n        \"\"\"Initialize the kernel.\n\n        Args:\n            input_dim: Input dimension of the training data.\n            covariance_matrix: The fixed covariance matrix.\n            active_dims: Active dimensions.\n            name: Name of the kernel.\n        \"\"\"\n        super(FixedKernel, self).__init__(\n            input_dim=input_dim,\n            variance=1.0,\n            covariance_matrix=covariance_matrix,\n            active_dims=active_dims,\n            name=name,\n        )\n        self.variance.fix()\n\n    def to_dict(self) -> dict:\n        \"\"\"Save the kernel as a dictionary.\"\"\"\n        input_dict = super(Fixed, self)._save_to_input_dict()\n        input_dict[\"covariance_matrix\"] = self.fixed_K\n        input_dict[\"class\"] = \"GPy.kern.Fixed\"\n        input_dict.pop(\"useGPU\")\n\n        return input_dict\n\n\ndef compute_alpha(model: \"GP\", x) -> np.ndarray:\n    r\"\"\"Calculate the $\\alpha(x)$ Woodbury vector used for computing the boosted\n    covariance.\n\n    $$\n        \\alpha(x) = k(x, X)\\left(k(X, X) + \\sigma^2\\mathbb{1}\\right)^{-1},\n    $$\n\n    where $k$ is the kernel of `model`, $X$ is the training data of `model`, and\n    $\\sigma$ is the standard deviation of the observational noise.\n\n    Args:\n        model: The Gaussian-process model.\n        x: The input data. `shape = (n_points, n_features)`\n\n    Returns:\n        The $\\alpha$ vector. `shape = (n_points, n_training_points)`\n    \"\"\"\n    L = model._gpy_model.posterior.woodbury_chol\n    X = model.X\n    k = model.compute_kernel(X, x)\n    return scipy.linalg.solve_triangular(\n        L.T, scipy.linalg.solve_triangular(L, k, lower=True)\n    )\n\n\nclass CrossTaskKernel(BasisFuncKernel):\n    \"\"\"A kernel that is one iff the X-task corresponds to one of the `task_indices`.\"\"\"\n\n    def __init__(\n        self,\n        task_indices: Union[Tuple[int, int], int, np.ndarray],\n        index_dim: int,\n        variance=1.0,\n        name=\"task_domain\",\n    ):\n        super().__init__(\n            input_dim=1,\n            variance=variance,\n            active_dims=(index_dim,),\n            ARD=False,\n            name=name,\n        )\n        self.task_indices = np.atleast_2d(np.asarray(task_indices, dtype=int))\n        assert self.task_indices.size >= 1, \"Need at least one task.\"\n\n    def _phi(self, X: np.ndarray) -> np.ndarray:\n        # atol maps our floats to tasks\n        is_domain_task = np.isclose(X, self.task_indices, atol=0.5, rtol=0)\n        return is_domain_task.any(axis=-1, keepdims=True)\n"
  },
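  {
    "path": "examples/psd_utils_sketch.py",
    "content": "# Sketch of the positive-definiteness helpers above: nearest_pd projects an\n# indefinite symmetric matrix onto the PSD cone (eigenvalues clipped near\n# zero), and compute_cholesky adds diagonal jitter until a Cholesky\n# factorisation succeeds. The toy matrices are illustrative.\nimport numpy as np\n\nfrom transopt.optimizer.model.utils import compute_cholesky, is_pd, nearest_pd\n\na = np.array([[2.0, -1.0, 0.0], [-1.0, 0.1, 1.5], [0.0, 1.5, 1.0]])\nprint(is_pd(a))          # False: the matrix is indefinite\nprint(nearest_pd(a))     # nearest PSD matrix in the Frobenius norm\n\nm = np.ones((3, 3))      # rank-1, so plain Cholesky fails\nL = compute_cholesky(m)  # succeeds after jitter is added to the diagonal\nprint(np.max(np.abs(L @ L.T - m)))  # reconstruction error stays tiny\n"
  },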
  {
    "path": "transopt/optimizer/normalizer/__init__.py",
    "content": "from transopt.optimizer.normalizer.standerd import Standard_normalizer\nfrom transopt.optimizer.normalizer.normalizer_base import NormalizerBase"
  },
  {
    "path": "transopt/optimizer/normalizer/normalizer_base.py",
    "content": "from abc import abstractmethod, ABC\nfrom typing import Dict, Hashable\nimport numpy as np\n\nclass NormalizerBase(ABC):\n    def __init__(self, config):\n        self.config = config\n    @abstractmethod\n    def fit(self, X, Y):\n        raise NotImplementedError\n    @abstractmethod \n    def transform(self, X = None, Y = None):\n        raise NotImplementedError\n    @abstractmethod\n    def inverse_transform(self, X = None, Y = None):\n\n        raise NotImplementedError \n    "
  },
  {
    "path": "transopt/optimizer/normalizer/standerd.py",
    "content": "import numpy as np\n\nfrom sklearn.preprocessing import StandardScaler\n\nfrom transopt.agent.registry import normalizer_registry\nfrom transopt.optimizer.normalizer.normalizer_base import NormalizerBase\n\n\n\n# class XScaler:\n#     def __init__(self, ranges):\n#         self.ranges = np.array(ranges)\n#         self.min = self.ranges[:, 0]\n#         self.max = self.ranges[:, 1]\n    \n    \n#     def transform(self, values):\n#         values = np.array(values)\n#         scaled_values = 2 * (values - self.min) / (self.max - self.min) - 1\n#         return scaled_values\n    \n#     def inverse_transform(self, scaled_values):\n#         scaled_values = np.array(scaled_values)\n#         values = (scaled_values + 1) / 2 * (self.max - self.min) + self.min\n#         return values\n    \n    \n@normalizer_registry.register(\"Standard\")\nclass Standard_normalizer(NormalizerBase):\n    def __init__(self, config, metadata =  None, metadata_info = None):\n        self.y_normalizer = StandardScaler()\n        super(Standard_normalizer, self).__init__(config)\n        \n\n    def fit(self, X, Y):\n        self.y_normalizer.fit(Y)\n            \n    def transform(self, X = None, Y = None):\n        # if X is not None:\n        #     X = self.x_normalizer.transform(X)\n        if Y is not None:\n            Y = self.y_normalizer.transform(Y)\n        return X, Y\n\n    def inverse_transform(self, X = None, Y = None):\n        # if X is not None:\n        #     X = self.x_normalizer.inverse_transform(X)\n        if Y is not None:\n            Y = self.y_normalizer.inverse_transform(Y)\n        return X, Y"
  },
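  {
    "path": "examples/normalizer_sketch.py",
    "content": "# Round-trip sketch for the Standard normalizer above. Only Y is\n# standardised (the X branch is commented out), so X passes through\n# unchanged; the toy values are illustrative.\nimport numpy as np\n\nfrom transopt.optimizer.normalizer.standerd import Standard_normalizer\n\nY = np.array([[1.0], [3.0], [5.0]])\nnorm = Standard_normalizer(config={})\nnorm.fit(None, Y)\n_, Y_scaled = norm.transform(Y=Y)           # zero mean, unit variance\n_, Y_restored = norm.inverse_transform(Y=Y_scaled)\nprint(np.allclose(Y_restored, Y))           # True\n"
  },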
  {
    "path": "transopt/optimizer/optimizer_base/EvoOptimizerBase.py",
    "content": "import abc\nimport numpy as np\nimport ConfigSpace\nimport math\nfrom typing import Union, Dict, List\nfrom transopt.optimizer.optimizer_base import OptimizerBase\nimport GPyOpt\nfrom transopt.utils.serialization import vectors_to_ndarray, output_to_ndarray\nfrom transopt.utils.Visualization import visual_oned, visual_contour\n\n\n\nclass EVOBase(OptimizerBase):\n    \"\"\"\n    The abstract Model for Evolutionary Optimization\n    \"\"\"\n    def __init__(self, config):\n        super(EVOBase, self).__init__(config=config)\n        self._X = np.empty((0,))  # Initializes an empty ndarray for input vectors\n        self._Y = np.empty((0,))\n        self.config = config\n        self.search_space = None\n        self.design_space = None\n        self.mapping = None\n        self.ini_num = None\n        self._data_handler = None\n        self.population = None\n        self.pop_size = None\n\n"
  },
  {
    "path": "transopt/optimizer/optimizer_base/__init__.py",
    "content": "# from optimizer.optimizer_base.optimizerBase import OptimizerBase\n# from optimizer.optimizer_base.bo_base import BOBase\n"
  },
  {
    "path": "transopt/optimizer/optimizer_base/base.py",
    "content": "import abc\nfrom typing import List, Dict, Union\n\nclass OptimizerBase(abc.ABC, metaclass=abc.ABCMeta):\n    \"\"\"Abstract base class for the optimizers in the benchmark. This creates a common API across all packages.\n    \"\"\"\n\n    # Every implementation package needs to specify this static variable, e.g., \"primary_import=opentuner\"\n    primary_import = None\n\n    def __init__(self, config, **kwargs):\n        \"\"\"Build wrapper class to use an optimizer in benchmark.\n\n        Parameters\n        ----------\n        config : dict-like of dict-like\n            Configuration of the optimization variables. See API description.\n        \"\"\"\n        self.config = config\n        # self.verbose = config['verbose']\n        # self.optimizer_name = config['optimizer_name']\n        # self.exp_path = config['save_path']\n\n\n\n    @abc.abstractmethod\n    def suggest(self, n_suggestions:Union[None, int] = None)->List[Dict]:\n        \"\"\"Get a suggestion from the optimizer.\n\n        Parameters\n        ----------\n        n_suggestions : int\n            Desired number of parallel suggestions in the output\n\n        Returns\n        -------\n        next_guess : list of dict\n            List of `n_suggestions` suggestions to evaluate the objective\n            function. Each suggestion is a dictionary where each key\n            corresponds to a parameter being optimized.\n        \"\"\"\n        pass\n\n    @abc.abstractmethod\n    def observe(self, input_vectors: Union[List[Dict], Dict], output_value: Union[List[Dict], Dict]) -> None:\n        \"\"\"Send an observation of a suggestion back to the optimizer.\n\n        Parameters\n        ----------\n        X : list of dict-like\n            Places where the objective function has already been evaluated.\n            Each suggestion is a dictionary where each key corresponds to a\n            parameter being optimized.\n        y : array-like, shape (n,)\n            Corresponding values where objective has been evaluated\n        \"\"\"\n        pass\n"
  },
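  {
    "path": "examples/random_search_sketch.py",
    "content": "# Sketch of the OptimizerBase contract above: a random-search optimizer\n# implementing suggest/observe. The 'bounds' config key and the class name\n# are illustrative assumptions, not part of the package API.\nimport random\nfrom typing import Dict, List, Union\n\nfrom transopt.optimizer.optimizer_base.base import OptimizerBase\n\n\nclass RandomSearch(OptimizerBase):\n    def __init__(self, config, **kwargs):\n        super().__init__(config, **kwargs)\n        self.bounds = config['bounds']  # e.g. {'x': (-5.0, 5.0)}\n        self.history = []\n\n    def suggest(self, n_suggestions: Union[None, int] = None) -> List[Dict]:\n        # draw each parameter uniformly from its range\n        n = n_suggestions or 1\n        return [{name: random.uniform(*rng) for name, rng in self.bounds.items()}\n                for _ in range(n)]\n\n    def observe(self, input_vectors, output_value) -> None:\n        self.history.append((input_vectors, output_value))\n\n\nopt = RandomSearch({'bounds': {'x': (-5.0, 5.0), 'y': (0.0, 1.0)}})\nbatch = opt.suggest(3)\nopt.observe(batch, [{'f': sum(v.values())} for v in batch])\nprint(batch)\n"
  },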
  {
    "path": "transopt/optimizer/optimizer_base/bo.py",
    "content": "import abc\nimport copy\nimport math\nfrom typing import Dict, List, Union\n\nimport GPyOpt\nimport numpy as np\n\nfrom transopt.optimizer.acquisition_function.sequential import Sequential\nfrom transopt.optimizer.optimizer_base.base import OptimizerBase\nfrom transopt.space.fidelity_space import FidelitySpace\nfrom transopt.space.search_space import SearchSpace\nfrom transopt.utils.serialization import (multioutput_to_ndarray,\n                                          output_to_ndarray)\n\n\nclass BO(OptimizerBase):\n    \"\"\"\n    The abstract Model for Bayesian Optimization\n    \"\"\"\n\n    def __init__(self, Refiner, Sampler, ACF, Pretrain, Model, Normalizer, config):\n        super(BO, self).__init__(config=config)\n        self._X = np.empty((0,))  # Initializes an empty ndarray for input vectors\n        self._Y = np.empty((0,))\n        self.config = config\n        self.search_space = None\n        self.ini_num = 10\n        \n        self.SpaceRefiner = Refiner\n        self.Sampler = Sampler\n        self.ACF = ACF\n        self.Pretrain = Pretrain\n        self.Model = Model\n        self.Normalizer = Normalizer\n\n        \n        self.ACF.link_model(model=self.Model)\n        \n        self.MetaData = None\n    \n    def link_task(self, task_name:str, search_space: SearchSpace):\n        self.task_name = task_name\n        self.search_space = search_space\n        self._X = np.empty((0,))  # Initializes an empty ndarray for input vectors\n        self._Y = np.empty((0,))\n        self.ACF.link_space(self.search_space)\n        self.evaluator = Sequential(self.ACF, batch_size=1)\n            \n    \n    def search_space_refine(self, metadata = None, metadata_info = None):\n        if self.SpaceRefiner is not None:\n            self.search_space = self.SpaceRefiner.refine_space(self.search_space)\n            self.ACF.link_space(self.search_space)\n            self.evaluator = Sequential(self.ACF)\n            \n    def sample_initial_set(self, metadata = None, metadata_info = None):\n        return self.Sampler.sample(self.search_space, self.ini_num)\n    \n    def pretrain(self, metadata = None, metadata_info = None):\n        if self.Pretrain:\n            self.Pretrain.set_data(metadata, metadata_info)\n            self.Pretrain.meta_train()\n    \n    \n    def meta_fit(self, metadata = None, metadata_info = None):\n        if metadata:\n            source_X = []\n            source_Y = []\n            for key, datasets in metadata.items():\n                data_info = metadata_info[key]\n                source_X.append(np.array([[data[var['name']] for var in data_info['variables']] for data in datasets]))\n                source_Y.append(np.array([[data[var['name']] for var in data_info['objectives']] for data in datasets]))\n                \n            self.Model.meta_fit(source_X, source_Y)\n    \n    def fit(self):\n\n        Y = copy.deepcopy(self._Y)\n            \n        X = copy.deepcopy(self._X)\n        \n        self.Model.fit(X, Y, optimize = True)\n            \n    def suggest(self):\n        suggested_sample, acq_value = self.evaluator.compute_batch(None, context_manager=None)\n        # suggested_sample = self.search_space.zip_inputs(suggested_sample)\n\n        if self.Normalizer:\n            suggested_sample = self.Normalizer.inverse_transform(X=suggested_sample)[0]\n        \n        return suggested_sample\n\n        \n    def observe(self, X: np.ndarray, Y: List[Dict]) -> None:\n        # Check if the lists are empty and return 
if they are\n        if X.shape[0] == 0 or len(Y) == 0:\n            return\n\n        Y = np.array(output_to_ndarray(Y))\n        if self.Normalizer:\n            self.Normalizer.fit(X, Y)\n            X, Y = self.Normalizer.transform(X, Y)\n        \n        self._X = np.vstack((self._X, X)) if self._X.size else X\n        self._Y = np.vstack((self._Y, Y)) if self._Y.size else Y\n\n\n"
  },
  {
    "path": "transopt/optimizer/pretrain/__init__.py",
    "content": "from transopt.optimizer.pretrain.deepkernelpretrain import DeepKernelPretrain\nfrom transopt.optimizer.pretrain.hyper_bo import HyperBOPretrain"
  },
  {
    "path": "transopt/optimizer/pretrain/deepkernelpretrain.py",
    "content": "import copy\nimport os\n\nimport gpytorch\nimport numpy as np\nimport torch\nimport torch.nn as nn\nfrom sklearn.preprocessing import MinMaxScaler\n\nfrom transopt.agent.registry import pretrain_registry\nfrom transopt.optimizer.pretrain.pretrain_base import PretrainBase\n\nnp.random.seed(1203)\nRandomQueryGenerator= np.random.RandomState(413)\nRandomSupportGenerator= np.random.RandomState(413)\nRandomTaskGenerator = np.random.RandomState(413)\n\n\n\nclass Metric(object):\n    def __init__(self,prefix='train: '):\n        self.reset()\n        self.message=prefix + \"loss: {loss:.2f} - noise: {log_var:.2f} - mse: {mse:.2f}\"\n        \n    def update(self,loss,noise,mse):\n        self.loss.append(loss.item())\n        self.noise.append(noise.item())\n        self.mse.append(mse.item())\n    \n    def reset(self,):\n        self.loss = []\n        self.noise = []\n        self.mse = []\n    \n    def report(self):\n        return self.message.format(loss=np.mean(self.loss),\n                            log_var=np.mean(self.noise),\n                            mse=np.mean(self.mse))\n    \n    def get(self):\n        return {\"loss\":np.mean(self.loss),\n                \"noise\":np.mean(self.noise),\n                \"mse\":np.mean(self.mse)}\n    \n\ndef totorch(x,device):\n\n    return torch.Tensor(x).to(device)    \n\nclass MLP(nn.Module):\n    def __init__(self, input_size, hidden_size=[32,32,32,32], dropout=0.0):\n        \n        super(MLP, self).__init__()\n        self.nonlinearity = nn.ReLU()\n        self.fc = nn.ModuleList([nn.Linear(in_features=input_size, out_features=hidden_size[0])])\n        for d_out in hidden_size[1:]:\n            self.fc.append(nn.Linear(in_features=self.fc[-1].out_features, out_features=d_out))\n        self.out_features = hidden_size[-1]\n        self.dropout = nn.Dropout(dropout)\n    def forward(self,x):\n        \n        for fc in self.fc[:-1]:\n            x = fc(x)\n            x = self.dropout(x)\n            x = self.nonlinearity(x)\n        x = self.fc[-1](x)\n        x = self.dropout(x)\n        return x\n\nclass ExactGPLayer(gpytorch.models.ExactGP):\n    def __init__(self, train_x, train_y, likelihood,config,dims ):\n        super(ExactGPLayer, self).__init__(train_x, train_y, likelihood)\n        self.mean_module  = gpytorch.means.ConstantMean()\n\n        if(config[\"kernel\"]=='rbf' or config[\"kernel\"]=='RBF'):\n            self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel(ard_num_dims=dims if config[\"ard\"] else None))\n        elif(config[\"kernel\"]=='matern'):\n            self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.MaternKernel(nu=config[\"nu\"],ard_num_dims=dims if config[\"ard\"] else None))\n        else:\n            raise ValueError(\"[ERROR] the kernel '\" + str(config[\"kernel\"]) + \"' is not supported for regression, use 'rbf' or 'spectral'.\")\n            \n    def forward(self, x):\n        mean_x  = self.mean_module(x)\n        covar_x = self.covar_module(x)\n        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)\n    \n    \n\n@pretrain_registry.register(\"DeepKernelPretrain\")\nclass DeepKernelPretrain(nn.Module):\n    def __init__(self, config = {}):\n        super(DeepKernelPretrain, self).__init__()\n        ## GP parameters\n        if len(config) == 0:\n            self.config = {\"kernel\": \"matern\", 'ard': False, \"nu\": 2.5, 'hidden_size': [32,32,32,32],\n                           'n_inner_steps': 1, 'test_batch_size':1, 
'batch_size':1, 'seed':0, 'checkpoint_path':'./external/model/FSBO/'}\n        else:\n            self.config = config\n            \n        self.batch_size = self.config['batch_size']\n        self.test_batch_size = self.config['test_batch_size']\n        self.n_inner_steps = self.config['n_inner_steps']\n        self.checkpoint_path = self.config['checkpoint_path']\n\n        self.device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n        self.hidden_size = [32,32,32,32]\n        self.kernel_config = {\"kernel\": self.config['kernel'], 'ard': self.config['ard'], \"nu\": self.config['nu']}\n        self.Seed = self.config['seed']\n\n        self.train_metrics = Metric()\n        self.valid_metrics = Metric(prefix=\"valid: \")\n        self.mse        = nn.MSELoss()\n        self.curr_valid_loss = np.inf\n        os.makedirs(self.checkpoint_path,exist_ok=True)\n\n        print(self)\n\n    def set_data(self, metadata, metadata_info= None):\n        \n        train_data = {}\n        for dataset_name, data in metadata.items():\n            objectives = metadata_info[dataset_name][\"objectives\"]\n            obj = objectives[0][\"name\"]\n\n            obj_data = [d[obj] for d in data]\n            var_data = [[d[var[\"name\"]] for var in metadata_info[dataset_name][\"variables\"]] for d in data]\n            self.input_size = metadata_info[dataset_name]['num_variables']\n            train_data[dataset_name] = {'X':np.array(var_data), 'y':np.array(obj_data)[:, np.newaxis]}\n            \n        self.train_data = train_data\n        self.feature_extractor =  MLP(self.input_size, hidden_size = self.hidden_size).to(self.device)\n        self.get_tasks()\n\n    def get_tasks(self,):\n        self.tasks = list(self.train_data.keys())\n\n\n    def get_model_likelihood_mll(self, train_size):\n        \n        train_x=torch.ones(train_size, self.feature_extractor.out_features).to(self.device)\n        train_y=torch.ones(train_size).to(self.device)\n\n        likelihood = gpytorch.likelihoods.GaussianLikelihood()\n        model = ExactGPLayer(train_x = train_x, train_y = train_y, likelihood = likelihood, config = self.kernel_config, dims = self.feature_extractor.out_features)\n        self.model = model.to(self.device)\n        self.likelihood = likelihood.to(self.device)\n        self.mll        = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model).to(self.device)\n\n\n    def epoch_end(self):\n        RandomTaskGenerator.shuffle(self.tasks)\n\n\n    def meta_train(self, epochs = 50000, lr = 0.0001):\n        self.get_model_likelihood_mll(self.batch_size)\n        \n        optimizer = torch.optim.Adam(self.parameters(), lr=lr)\n        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, epochs, eta_min=1e-7)\n        \n\n        for epoch in range(epochs):\n            self.train_loop(epoch, optimizer, scheduler)\n        self.save_checkpoint(self.checkpoint_path + f'Seed_{self.Seed}_{len(self.tasks)}')\n    def train_loop(self, epoch, optimizer, scheduler=None):\n        self.epoch_end()\n        assert(self.training)\n        for task in self.tasks:\n            inputs, labels = self.get_batch(task)\n            for _ in range(self.n_inner_steps):\n                optimizer.zero_grad()\n                z = self.feature_extractor(inputs)\n                self.model.set_train_data(inputs=z, targets=labels, strict=False)\n                predictions = self.model(z)\n                loss = -self.mll(predictions, self.model.train_targets)\n             
   loss.backward()\n                optimizer.step()\n                mse = self.mse(predictions.mean, labels)\n                self.train_metrics.update(loss,self.model.likelihood.noise,mse)\n        if scheduler:\n            scheduler.step()\n        \n        training_results = self.train_metrics.get()\n            \n        validation_results = self.valid_metrics.get()\n        # for k,v in validation_results.items():\n        #     self.valid_summary_writer.add_scalar(k, v, epoch)\n        self.feature_extractor.train()\n        self.likelihood.train()\n        self.model.train()\n        \n        if validation_results[\"loss\"] < self.curr_valid_loss:\n            self.save_checkpoint(os.path.join(self.checkpoint_path,\"weights\"))\n            self.curr_valid_loss = validation_results[\"loss\"]\n        self.valid_metrics.reset()       \n        self.train_metrics.reset()\n            \n    def test_loop(self, task, train): \n        (x_support, y_support),(x_query,y_query) = self.get_support_and_queries(task,train)\n        z_support = self.feature_extractor(x_support).detach()\n        self.model.set_train_data(inputs=z_support, targets=y_support, strict=False)\n        self.model.eval()        \n        self.feature_extractor.eval()\n        self.likelihood.eval()\n\n        with torch.no_grad():\n            z_query = self.feature_extractor(x_query).detach()\n            pred    = self.likelihood(self.model(z_query))\n            loss = -self.mll(pred, y_query)\n            lower, upper = pred.confidence_region() #2 standard deviations above and below the mean\n\n        mse = self.mse(pred.mean, y_query)\n\n        return mse,loss\n\n    def get_batch(self,task):\n\n        Lambda,response =     np.array(self.train_data[task][\"X\"]), MinMaxScaler().fit_transform(np.array(self.train_data[task][\"y\"])).reshape(-1,)\n\n        card, dim = Lambda.shape\n        \n        support_ids = RandomSupportGenerator.choice(np.arange(card),\n                                              replace=False,size= min(self.batch_size, card))\n\n        \n        inputs,labels = Lambda[support_ids], response[support_ids]\n        inputs,labels = totorch(inputs,device=self.device), totorch(labels.reshape(-1,),device=self.device)\n        return inputs, labels\n        \n    def get_support_and_queries(self,task, train=False):\n        \n\n        hpo_data = self.valid_data if not train else self.train_data\n        Lambda,response =     np.array(hpo_data[task][\"X\"]), MinMaxScaler().fit_transform(np.array(hpo_data[task][\"y\"])).reshape(-1,)\n        card, dim = Lambda.shape\n\n        support_ids = RandomSupportGenerator.choice(np.arange(card),\n                                              replace=False,size=min(self.batch_size, card))\n        diff_set = np.setdiff1d(np.arange(card),support_ids)\n        query_ids = RandomQueryGenerator.choice(diff_set,replace=False,size=min(self.batch_size, len(diff_set)))\n        \n        support_x,support_y = Lambda[support_ids], response[support_ids]\n        query_x,query_y = Lambda[query_ids], response[query_ids]\n        \n        return (totorch(support_x,self.device),totorch(support_y.reshape(-1,),self.device)),\\\n    (totorch(query_x,self.device),totorch(query_y.reshape(-1,),self.device))\n        \n    def save_checkpoint(self, checkpoint):\n\n        gp_state_dict         = self.model.state_dict()\n        likelihood_state_dict = self.likelihood.state_dict()\n        nn_state_dict         = self.feature_extractor.state_dict()\n        
torch.save({'gp': gp_state_dict, 'likelihood': likelihood_state_dict, 'net':nn_state_dict}, checkpoint)\n\n    def load_checkpoint(self, checkpoint):\n        ckpt = torch.load(checkpoint)\n        self.model.load_state_dict(ckpt['gp'])\n        self.likelihood.load_state_dict(ckpt['likelihood'])\n        self.feature_extractor.load_state_dict(ckpt['net'])\n        \n"
  },
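  {
    "path": "examples/deep_kernel_pretrain_sketch.py",
    "content": "# Usage sketch for DeepKernelPretrain with the metadata layout set_data\n# expects: per-dataset lists of {name: value} records plus a metadata_info\n# block naming variables and objectives. The dataset name, toy values and\n# the tiny epoch count are illustrative; validation metrics stay empty\n# because test_loop is never called in this loop.\nimport numpy as np\n\nfrom transopt.optimizer.pretrain.deepkernelpretrain import DeepKernelPretrain\n\nrng = np.random.RandomState(0)\nrecords = [{'x0': float(x), 'y': float(x ** 2)} for x in rng.uniform(-1, 1, 16)]\nmetadata = {'toy_task': records}\nmetadata_info = {'toy_task': {\n    'objectives': [{'name': 'y'}],\n    'variables': [{'name': 'x0'}],\n    'num_variables': 1,\n}}\n\npretrainer = DeepKernelPretrain()\npretrainer.set_data(metadata, metadata_info)\npretrainer.meta_train(epochs=2, lr=1e-3)  # a couple of epochs to exercise the loop\n"
  },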
  {
    "path": "transopt/optimizer/pretrain/get_pretrain.py",
    "content": "from transopt.agent.registry import pretrain_registry\n\n\n\ndef get_pretrain(pretrain_name, **kwargs):\n    \"\"\"Create the optimizer object.\"\"\"\n    pretrain_class = pretrain_registry.get(pretrain_name)\n    config = kwargs\n\n    if pretrain_class is not None:\n        pretrain_method = pretrain_class(config=config)\n    else:\n        print(f\"Refiner '{pretrain_name}' not found in the registry.\")\n        raise NameError\n    return pretrain_method"
  },
  {
    "path": "transopt/optimizer/pretrain/hyper_bo.py",
    "content": "from transopt.agent.registry import pretrain_registry\nfrom transopt.optimizer.pretrain.pretrain_base import PretrainBase\n\n\n\n@pretrain_registry.register(\"hyperbo\")\nclass HyperBOPretrain(PretrainBase):\n    def __init__(self, config) -> None:\n        super().__init__(config)"
  },
  {
    "path": "transopt/optimizer/pretrain/pretrain_base.py",
    "content": "\n\nclass PretrainBase:\n    def __init__(self) -> None:\n        pass"
  },
  {
    "path": "transopt/optimizer/refiner/__init__.py",
    "content": "from transopt.optimizer.refiner.box import BoxRefiner\nfrom transopt.optimizer.refiner.ellipse import EllipseRefiner\nfrom transopt.optimizer.refiner.prune import Prune"
  },
  {
    "path": "transopt/optimizer/refiner/box.py",
    "content": "from transopt.optimizer.refiner.refiner_base import RefinerBase\nfrom transopt.agent.registry import space_refiner_registry\n\n@space_refiner_registry.register(\"box\")\nclass BoxRefiner(RefinerBase):\n    def __init__(self, config) -> None:\n        super().__init__(config)\n        "
  },
  {
    "path": "transopt/optimizer/refiner/ellipse.py",
    "content": "from transopt.optimizer.refiner.refiner_base import RefinerBase\nfrom transopt.agent.registry import space_refiner_registry\n\n@space_refiner_registry.register(\"ellipse\")\nclass EllipseRefiner(RefinerBase):\n    def __init__(self, config) -> None:\n        super().__init__(config)"
  },
  {
    "path": "transopt/optimizer/refiner/get_refiner.py",
    "content": "from transopt.agent.registry import space_refiner_registry\n\n\n\ndef get_refiner(refiner_name, **kwargs):\n    \"\"\"Create the optimizer object.\"\"\"\n    refiner_class = space_refiner_registry.get(refiner_name)\n    config = kwargs\n\n    if refiner_class is not None:\n        refiner = refiner_class(config=config)\n    else:\n        print(f\"Refiner '{refiner_name}' not found in the registry.\")\n        raise NameError\n    return refiner"
  },
  {
    "path": "transopt/optimizer/refiner/prune.py",
    "content": "\nfrom transopt.optimizer.refiner.refiner_base import RefinerBase\nfrom transopt.agent.registry import space_refiner_registry\n\n@space_refiner_registry.register(\"Prune\")\nclass Prune(RefinerBase):\n    def __init__(self, config) -> None:\n        super().__init__(config)\n            \n    def refine(self, search_space, metadata=None):\n        \n        raise NotImplementedError(\"Sample method should be implemented by subclasses.\")\n    \n    def check_metadata_avaliable(self, metadata):\n        if metadata is None:\n            return False\n        return True "
  },
  {
    "path": "transopt/optimizer/refiner/refiner_base.py",
    "content": "\n\n\nclass RefinerBase:\n    def __init__(self, config) -> None:\n        self.config = config\n        \n    def refine(self, search_space, metadata=None):\n        \n        raise NotImplementedError(\"Sample method should be implemented by subclasses.\")\n    \n    def check_metadata_avaliable(self, metadata):\n        if metadata is None:\n            return False\n        return True"
  },
  {
    "path": "transopt/optimizer/sampler/__init__.py",
    "content": "from transopt.optimizer.sampler.random import RandomSampler\nfrom transopt.optimizer.sampler.sobel import SobolSampler\nfrom transopt.optimizer.sampler.lhs import LatinHypercubeSampler"
  },
  {
    "path": "transopt/optimizer/sampler/get_sampler.py",
    "content": "from transopt.agent.registry import sampler_registry\n\n\n\ndef get_sampler(sampler_name, **kwargs):\n    \"\"\"Create the optimizer object.\"\"\"\n    sampler_class = sampler_registry.get(sampler_name)\n\n    if sampler_class is not None:\n        sampler = sampler_class(config=kwargs)\n    else:\n        print(f\"Sampler '{sampler_name}' not found in the registry.\")\n        raise NameError\n    return sampler"
  },
  {
    "path": "transopt/optimizer/sampler/gradient.py",
    "content": ""
  },
  {
    "path": "transopt/optimizer/sampler/grid.py",
    "content": "import numpy as np\nfrom sampler.sampler_base import Sampler\nfrom agent.registry import sampler_registry\n\n# @sampler_registry.register(\"grid\")\nclass GridSampler(Sampler):\n    def generate_grid_for_variable(self, var_range, is_discrete, steps):\n        if is_discrete:\n            if (var_range[1] - var_range[0] + 1) <= steps:\n                return np.arange(var_range[0], var_range[1] + 1)\n            else:\n                return np.linspace(\n                    var_range[0], var_range[1], num=steps, endpoint=True\n                ).round()\n        else:\n            return np.linspace(var_range[0], var_range[1], num=steps)\n\n    def sample(self, search_space, steps=5, metadata=None):\n        grids = []\n        for name in search_space.variables_order:\n            var_range = search_space.ranges[name]\n            is_discrete = search_space.var_discrete[name]\n            grid = self.generate_grid_for_variable(var_range, is_discrete, steps)\n            grids.append(grid)\n\n        mesh = np.meshgrid(*grids, indexing=\"ij\")\n        sample_points = np.stack(mesh, axis=-1).reshape(\n            -1, len(search_space.variables_order)\n        )\n        return sample_points\n"
  },
  {
    "path": "transopt/optimizer/sampler/lhs.py",
    "content": "import numpy as np\nfrom scipy.stats import qmc\n\nfrom transopt.optimizer.sampler.sampler_base import Sampler\nfrom transopt.agent.registry import sampler_registry\nfrom transopt.space.search_space import SearchSpace\n\n\n@sampler_registry.register(\"lhs\")\nclass LatinHypercubeSampler(Sampler):\n    def sample(self, search_space:SearchSpace, metadata = None):\n        d = len(search_space.variables_order)\n        sampler = qmc.LatinHypercube(d=d)\n        sample_points = sampler.random(n=self.n_samples)\n        for i, name in enumerate(search_space.variables_order):\n            var_range = search_space.ranges[name]\n            if search_space.var_discrete[name]: \n                continuous_vals = qmc.scale(\n                    sample_points[:, i][np.newaxis], var_range[0], var_range[1]\n                )\n                sample_points[:, i] = np.round(continuous_vals).astype(int)\n            else:  # 连续变量处理\n                sample_points[:, i] = qmc.scale(sample_points[:, i][np.newaxis], var_range[0], var_range[1])\n        return sample_points\n"
  },
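  {
    "path": "examples/lhs_sampler_sketch.py",
    "content": "# Sketch of LatinHypercubeSampler on a stub search space. The stub mimics\n# only the three attributes the samplers read (variables_order, ranges,\n# var_discrete); the real transopt SearchSpace provides much more.\nfrom types import SimpleNamespace\n\nfrom transopt.optimizer.sampler.lhs import LatinHypercubeSampler\n\nspace = SimpleNamespace(\n    variables_order=['x', 'n'],\n    ranges={'x': (-5.0, 5.0), 'n': (1, 8)},\n    var_discrete={'x': False, 'n': True},\n)\n\nsampler = LatinHypercubeSampler(n_samples=6, config={})\npoints = sampler.sample(space)  # shape (6, 2); the 'n' column is rounded\nprint(points)\n"
  },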
  {
    "path": "transopt/optimizer/sampler/lhs_BAK.py",
    "content": "import numpy as np\nimport scipy.stats.qmc as qmc\nfrom scipy import spatial\nfrom scipy import stats\nfrom scipy import linalg\nfrom numpy import ma\n\n__all__ = [\"lhs\"]\n\n\ndef lhs(d, samples=None, criterion=None, iterations=5, correlation_matrix=None):\n    \"\"\"\n    Generate a latin-hypercube design\n    Parameters\n    ----------\n    d : int\n        The number of factors to generate samples for\n    Optional\n    --------\n    samples : int\n        The number of samples to generate for each factor (Default: d)\n    criterion : str\n        Allowable values are \"center\" or \"c\", \"maximin\" or \"m\",\n        \"centermaximin\" or \"cm\", and \"correlation\" or \"corr\". If no value\n        given, the design is simply randomized.\n    iterations : int\n        The number of iterations in the maximin and correlations algorithms\n        (Default: 5).\n    correlation_matrix : ndarray\n         Enforce correlation between factors (only used in lhsmu)\n    Returns\n    -------\n    H : 2d-array\n        An n-by-samples design matrix that has been normalized so factor values\n        are uniformly spaced between zero and one.\n    \"\"\"\n    H = None\n    if samples is None:\n        samples = d\n\n    if criterion is None:\n        return _lhsclassic(d, samples)\n\n    criterion = criterion.lower()\n    if not criterion in (\"center\", \"c\", \"maximin\", \"m\", \"centermaximin\", \"cm\", \"correlation\", \"corr\", \"lhsmu\"):\n        raise ValueError('Invalid value for \"criterion\": {}'.format(criterion))\n\n    if criterion in (\"center\", \"c\"):\n        H = _lhscentered(d, samples)\n    elif criterion in (\"maximin\", \"m\"):\n        H = _lhsmaximin(d, samples, iterations, \"maximin\")\n    elif criterion in (\"centermaximin\", \"cm\"):\n        H = _lhsmaximin(d, samples, iterations, \"centermaximin\")\n    elif criterion in (\"correlation\", \"corr\"):\n        H = _lhscorrelate(d, samples, iterations)\n    elif criterion in (\"lhsmu\"):\n        # as specified by the paper. 
M is set to 5\n        H = _lhsmu(d, samples, correlation_matrix, M=5)\n\n    return H\n\n\ndef _lhsclassic(d, samples):\n    sampler = qmc.LatinHypercube(d=d)\n    return sampler.random(n=samples)\n\n\n\ndef _lhscentered(d, samples):\n    sampler = qmc.LatinHypercube(d=d)\n    H = sampler.random(n=samples)\n    H = (np.floor(H * samples) + 0.5) / samples\n    return H\n\n\ndef _lhsmaximin(d, samples, iterations, lhstype):\n    maxdist = 0\n    best_sample = None\n\n    for i in range(iterations):\n        sampler = qmc.LatinHypercube(d=d)\n        Hcandidate = sampler.random(n=samples)\n\n        if lhstype != \"maximin\":\n            Hcandidate = (np.floor(Hcandidate * samples) + 0.5) / samples\n\n        # keep the dimension `d` intact; it is reused on later iterations\n        pairwise = spatial.distance.pdist(Hcandidate, \"euclidean\")\n        min_d = np.min(pairwise)\n        if maxdist < min_d:\n            maxdist = min_d\n            best_sample = Hcandidate.copy()\n\n    return best_sample if best_sample is not None else np.zeros((samples, d))\n\n\ndef _lhscorrelate(d, samples, iterations):\n    mincorr = np.inf\n    best_sample = None\n\n    for i in range(iterations):\n        sampler = qmc.LatinHypercube(d=d)\n        Hcandidate = sampler.random(n=samples)\n\n        R = np.corrcoef(Hcandidate.T)\n        max_corr = np.max(np.abs(R - np.eye(d)))\n\n        if max_corr < mincorr:\n            mincorr = max_corr\n            best_sample = Hcandidate.copy()\n\n    return best_sample\n\n\ndef _lhsmu(d, samples=None, corr=None, M=5):\n    if samples is None:\n        samples = d\n\n    I = M * samples\n\n    rdpoints = np.random.uniform(size=(I, d))\n\n    dist = spatial.distance.cdist(rdpoints, rdpoints, metric=\"euclidean\")\n    D_ij = ma.masked_array(dist, mask=np.identity(I))\n\n    index_rm = np.zeros(I - samples, dtype=int)\n    i = 0\n    while i < I - samples:\n        order = ma.sort(D_ij, axis=1)\n        avg_dist = ma.mean(order[:, 0:2], axis=1)\n        min_l = ma.argmin(avg_dist)\n\n        D_ij[min_l, :] = ma.masked\n        D_ij[:, min_l] = ma.masked\n\n        index_rm[i] = min_l\n        i += 1\n\n    rdpoints = np.delete(rdpoints, index_rm, axis=0)\n\n    if corr is not None:\n        # check if covariance matrix is valid\n        assert type(corr) == np.ndarray\n        assert corr.ndim == 2\n        assert corr.shape[0] == corr.shape[1]\n        assert corr.shape[0] == d\n\n        norm_u = stats.norm().ppf(rdpoints)\n        L = linalg.cholesky(corr, lower=True)\n\n        norm_u = np.matmul(norm_u, L)\n\n        H = stats.norm().cdf(norm_u)\n    else:\n        H = np.zeros_like(rdpoints, dtype=float)\n        rank = np.argsort(rdpoints, axis=0)\n\n        for l in range(samples):\n            low = float(l) / samples\n            high = float(l + 1) / samples\n\n            l_pos = rank == l\n            H[l_pos] = np.random.uniform(low, high, size=d)\n    return H\n\n\nif __name__ == \"__main__\":\n    \"\"\"\n    Example\n    -------\n    A 3-factor design (defaults to 3 samples)::\n        >>> lhs(3, random_state=42)\n        array([[ 0.12484671,  0.95539205,  0.24399798],\n               [ 0.53288616,  0.38533955,  0.86703834],\n               [ 0.68602787,  0.31690477,  0.38533151]])\n    A 4-factor design with 6 samples::\n        >>> lhs(4, samples=6)\n        array([[ 0.06242335,  0.19266575,  0.88202411,  0.89439364],\n               [ 0.19266977,  0.53538985,  0.53030416,  0.49498498],\n               [ 0.71737371,  0.75412607,  0.17634727,  0.71520486],\n               [ 0.63874044,  0.85658231,  0.33676408,  0.31102936],\n           
    [ 0.43351917,  0.45134543,  0.12199899,  0.53056742],\n               [ 0.93530882,  0.15845238,  0.7386575 ,  0.09977641]])\n    A 2-factor design with 5 centered samples::\n        >>> lhs(2, samples=5, criterion='center', random_state=42)\n        array([[ 0.1,  0.9],\n               [ 0.5,  0.5],\n               [ 0.7,  0.1],\n               [ 0.3,  0.7],\n               [ 0.9,  0.3]])\n    A 3-factor design with 4 samples where the minimum distance between\n    all samples has been maximized::\n        >>> lhs(3, samples=4, criterion='maximin')\n        array([[ 0.69754389,  0.2997106 ,  0.96250964],\n               [ 0.10585037,  0.09872038,  0.73157522],\n               [ 0.25351996,  0.65148999,  0.07337204],\n               [ 0.91276926,  0.97873992,  0.42783549]])\n    A 4-factor design with 5 samples where the samples are as uncorrelated\n    as possible (within 10 iterations)::\n        >>> lhs(4, samples=5, criterion='correlation', iterations=10)\n        array([[ 0.72088348,  0.05121366,  0.97609357,  0.92487081],\n               [ 0.49507404,  0.51265511,  0.00808672,  0.37915272],\n               [ 0.22217816,  0.2878673 ,  0.24034384,  0.42786629],\n               [ 0.91977309,  0.93895699,  0.64061224,  0.14213258],\n               [ 0.04719698,  0.70796822,  0.53910322,  0.78857071]])\n    \"\"\"\n\n    h1 = lhs(4, samples=5)\n    print(h1)\n\n    sampler = qmc.LatinHypercube(d=4)\n    h2 = sampler.random(n=5)\n    print(h2)\n    \n    d = 3\n    samples = 10\n    corr = np.array([[1.0, 0.5, 0.2],\n                    [0.5, 1.0, 0.3],\n                    [0.2, 0.3, 1.0]])\n    sampled_data = _lhsmu(d, samples, corr, M=5)\n    print(\"Generated samples with specified correlation:\")\n    print(sampled_data)"
  },
  {
    "path": "transopt/optimizer/sampler/meta.py",
    "content": ""
  },
  {
    "path": "transopt/optimizer/sampler/random.py",
    "content": "import numpy as np\n\nfrom transopt.optimizer.sampler.sampler_base import Sampler\nfrom transopt.agent.registry import sampler_registry\n\n@sampler_registry.register(\"random\")\nclass RandomSampler(Sampler):\n    def sample(self, search_space, metadata = None):\n        samples = np.zeros((self.n_samples, len(search_space.variables_order)))\n        for i, name in enumerate(search_space.variables_order):\n            var_range = search_space.ranges[name]\n            if search_space.var_discrete[name]:  # 判断是否为离散变量\n                samples[:, i] = np.random.randint(\n                    var_range[0], var_range[1] + 1, size=self.n_samples\n                )\n            else:\n                samples[:, i] = np.random.uniform(\n                    var_range[0], var_range[1], size=self.n_samples\n                )\n        return samples\n"
  },
  {
    "path": "transopt/optimizer/sampler/sampler_base.py",
    "content": "\nclass Sampler:\n    def __init__(self, n_samples, config) -> None:\n        self.config = config\n        self.n_samples = n_samples\n        \n    def sample(self, search_space, metadata=None):\n        raise NotImplementedError(\"Sample method should be implemented by subclasses.\")\n    \n    def change_n_samples(self, n_samples):\n        self.n_samples = n_samples\n    \n    def check_metadata_avaliable(self, metadata):\n        if metadata is None:\n            return False\n        return True"
  },
  {
    "path": "transopt/optimizer/sampler/sobel.py",
    "content": "import numpy as np\nfrom scipy.stats import qmc\n\nfrom transopt.optimizer.sampler.sampler_base import Sampler\nfrom transopt.agent.registry import sampler_registry\n\n@sampler_registry.register(\"sobol\")\nclass SobolSampler(Sampler):\n    def sample(self, search_space, metadata = None):\n        d = len(search_space.variables_order)\n        sampler = qmc.Sobol(d=d, scramble=True)\n        sample_points = sampler.random(n=self.n_samples)\n        for i, name in enumerate(search_space.variables_order):\n            var_range = search_space.ranges[name]\n            if search_space.var_discrete[name]:\n                # 对离散变量进行处理\n                continuous_vals = qmc.scale(\n                    sample_points[:, i], var_range[0], var_range[1]\n                )\n                sample_points[:, i] = np.round(continuous_vals).astype(int)\n            else:\n                sample_points[:, i] = qmc.scale(\n                    sample_points[:, i], var_range[0], var_range[1]\n                )\n        return sample_points\n"
  },
  {
    "path": "transopt/optimizer/selector/__init__.py",
    "content": "from transopt.optimizer.selector.selector_base import SelectorBase\nfrom transopt.optimizer.selector.lsh_selector import LSHSelector\nfrom transopt.optimizer.selector.fuzzy_selector import FuzzySelector"
  },
  {
    "path": "transopt/optimizer/selector/fuzzy_selector.py",
    "content": "from transopt.agent.registry import selector_registry\nfrom transopt.optimizer.selector.selector_base import SelectorBase\n\n\n@selector_registry.register(\"Fuzzy\")\nclass FuzzySelector(SelectorBase):\n    def __init__(self, config):\n        super(FuzzySelector, self).__init__(config)\n\n    def fetch_data(self, tasks_info):\n        task_name = tasks_info[\"additional_config\"][\"problem_name\"]        \n        variable_names = [var['name'] for var in tasks_info[\"variables\"]] \n        dimensions = len(variable_names)\n        objectives = len(tasks_info[\"objectives\"])\n        \n        conditions = {\n            \"task_name\": task_name,\n            \"dimensions\": dimensions,\n            \"objectives\": objectives,\n        }\n\n        datasets_list = self.data_manager.db.search_tables_by_metadata(conditions)\n        metadata = {\n            dataset_name: self.data_manager.db.select_data(dataset_name)\n            for dataset_name in datasets_list\n        }\n        metadata_info = {\n            dataset_name: self.data_manager.db.query_dataset_info(dataset_name)\n            for dataset_name in datasets_list\n        }\n\n        return metadata, metadata_info\n"
  },
  {
    "path": "transopt/optimizer/selector/lsh_selector.py",
    "content": "from transopt.optimizer.selector.selector_base import SelectorBase \nfrom transopt.agent.registry import selector_registry\n\n@selector_registry.register('LSH')\nclass LSHSelector(SelectorBase):\n    def __init__(self, config):\n        \n        super(LSHSelector, self).__init__(config)\n        \n    def fetch_data(self, tasks_info):\n        task_name = tasks_info['additional_config']['problem_name']\n        variable_names = [var['name'] for var in tasks_info[\"variables\"]] \n        num_variables = len(variable_names)\n        num_objectives = len(tasks_info[\"objectives\"])\n        name_str = \" \".join(variable_names)\n        datasets_list = self.data_manager.search_similar_datasets(task_name, {'variable_names':name_str, 'num_variables':num_variables, 'num_objectives':num_objectives})\n        metadata = {}\n        metadata_info = {}\n        for dataset_name in datasets_list:\n                metadata[dataset_name] = self.data_manager.db.select_data(dataset_name)\n                metadata_info[dataset_name] = self.data_manager.db.query_dataset_info(dataset_name)\n        return metadata, metadata_info\n"
  },
  {
    "path": "transopt/optimizer/selector/selector_base.py",
    "content": "\n\nfrom transopt.datamanager.manager import DataManager\nfrom abc import ABC, abstractmethod\n\nclass SelectorBase:\n    def __init__(self, config):\n        self.data_manager = DataManager()\n        \n\n\n    @abstractmethod\n    def fetch_data(self, tasks_info):\n        raise NotImplementedError\n    \n    "
  },
  {
    "path": "transopt/remote/__init__.py",
    "content": "from transopt.remote.experiment_tasks import celery_inst, ExperimentTaskHandler\nfrom transopt.remote.experiment_server import ExperimentServer\nfrom transopt.remote.experiment_client import ExperimentClient"
  },
  {
    "path": "transopt/remote/celeryconfig.py",
    "content": "## Broker settings.\nbroker_url = 'redis://localhost:6379/0'\nbroker_connection_retry_on_startup = True\n\n## Using the database to store task state and results.\nresult_backend = 'redis://localhost:6379/0'\n\n# If enabled the task will report its status as 'started' \n# when the task is executed by a worker.\ntask_track_started = True"
  },
  {
    "path": "transopt/remote/experiment_client.py",
    "content": "import requests\nimport time\n\n\nclass ExperimentClient:\n    def __init__(self, server_url, timeout=10):\n        self.server_url = server_url\n        self.timeout = timeout\n\n    def _handle_response(self, response):\n        if response.status_code != 200:\n            raise Exception(\n                f\"Server returned status code {response.status_code}: {response.text}\"\n            )\n        return response.json()\n\n    def start_experiment(self, params):\n        try:\n            response = requests.post(\n                f\"{self.server_url}/start_experiment\", json=params, timeout=self.timeout\n            )\n            data = self._handle_response(response)\n            return data.get(\"task_id\")\n        except requests.RequestException as e:\n            raise Exception(f\"Failed to start experiment: {e}\")\n\n    def get_experiment_result(self, task_id):\n        try:\n            response = requests.get(\n                f\"{self.server_url}/get_experiment_result/{task_id}\",\n                timeout=self.timeout,\n            )\n            return self._handle_response(response)\n        except requests.RequestException as e:\n            raise Exception(\n                f\"Failed to get experiment result for task ID {task_id}: {e}\"\n            )\n\n    def wait_for_result(self, task_id, poll_interval=2):\n        while True:\n            result = self.get_experiment_result(task_id)\n            if result[\"state\"] == \"SUCCESS\":\n                return result[\"result\"]\n            elif result[\"state\"] == \"FAILURE\":\n                raise Exception(f\"Experiment failed with status: {result['status']}\")\n            else:\n                print(f\"Experiment state: {result['state']}\")\n            time.sleep(poll_interval)\n\n\nif __name__ == \"__main__\":\n    client = ExperimentClient(server_url=\"http://192.168.3.49:5000\")\n\n    params = {\"param1\": \"value1\", \"param2\": \"value2\"}  # Example parameters\n\n    try:\n        task_id = client.start_experiment(params)\n        print(f\"Experiment started with task ID: {task_id}\")\n\n        result = client.wait_for_result(task_id)\n        print(f\"Experiment result: {result}\")\n    except Exception as e:\n        print(f\"Error: {e}\")"
  },
  {
    "path": "transopt/remote/experiment_server.py",
    "content": "from flask import Flask, jsonify, request\nfrom transopt.remote import celery_inst, ExperimentTaskHandler\n\n\nclass ExperimentServer:\n    def __init__(self, task_handler):\n        self.app = Flask(__name__)\n        self.task_handler = task_handler\n        self._setup_routes()\n\n    def _validate_params(self, params):\n        required_keys = [\"benchmark\", \"id\", \"budget\", \"seed\", \"bench_params\", \"fitness_params\"]\n        return all(key in params for key in required_keys)\n\n    def _setup_routes(self):\n        @self.app.route(\"/start_experiment\", methods=[\"POST\"])\n        def start_experiment():\n            params = request.json\n\n            if not self._validate_params(params):\n                return jsonify({\"error\": \"Invalid parameters\"}), 400\n\n            try:\n                task = self.task_handler.start_experiment(params)\n                return jsonify({\"task_id\": task.id}), 200\n            except Exception as e:\n                # TODO:\n                #   - better error handling\n                return jsonify({\"error\": str(e)}), 500\n\n        \n\n        @self.app.route(\"/get_experiment_result/<task_id>\", methods=[\"GET\"])\n        def get_experiment_result(task_id):\n            task = celery_inst.AsyncResult(task_id)\n            if task.state == \"PENDING\":\n                response = {\n                    \"state\": task.state,\n                    \"status\": \"Task is pending...\",\n                }\n            elif task.state != \"FAILURE\":\n                response = {\n                    \"state\": task.state,\n                    \"result\": task.result,\n                }\n            else:\n                # task failed\n                response = {\n                    \"state\": task.state,\n                    \"status\": str(task.info),  # this is the exception raised\n                }\n            return jsonify(response)\n\n    def run(self, host=\"0.0.0.0\", port=5001):\n        self.app.run(host=host, port=port)\n\n\nif __name__ == \"__main__\":\n    task_handler = ExperimentTaskHandler()\n    server = ExperimentServer(task_handler=task_handler)\n    server.run()"
  },
  {
    "path": "transopt/remote/experiment_tasks.py",
    "content": "from celery import Celery, Task\nfrom celery.utils.log import get_task_logger\nfrom transopt.agent.registry import problem_registry\n\ncelery_inst = Celery(__name__)\ncelery_inst.config_from_object(\"celeryconfig\")\n\nlogger = get_task_logger(__name__)\n\n\nclass DebugTask(Task):\n    def on_failure(self, exc, task_id, args, kwargs, einfo):\n        logger.warning(f\"Task [{task_id}] failed: {exc}\")\n\n    def on_success(self, retval, task_id, args, kwargs):\n        logger.warning(f\"Task [{task_id}] succeeded with result: {retval}\")\n\n    def after_return(self, status, retval, task_id, args, kwargs, einfo):\n        logger.warning(f\"Task [{task_id}] finished with status: {status}\")\n\n\nclass ExperimentTaskHandler:\n    def __init__(self):\n        pass\n\n    @celery_inst.task(bind=True, base=DebugTask)\n    def run_experiment(self, params):\n        # rdb.set_trace()\n        bench_name = params[\"benchmark\"]\n        bench_id = params[\"id\"]\n        budget = params[\"budget\"]\n        seed = params[\"seed\"]\n        bench_params = params[\"bench_params\"]\n        fitness_params = params[\"fitness_params\"]\n\n        benchmark_cls = problem_registry.get(bench_name)\n\n        if benchmark_cls is None:\n            self.update_state(state=\"FAILURE\", meta={\"status\": \"Benchmark not found!\"})\n            raise ValueError(f\"Benchmark {bench_name} not found!\")\n\n        try:\n            problem = benchmark_cls(\n                task_name=f\"{bench_name}_{bench_id}\",\n                task_id=bench_id,\n                budget=budget,\n                seed=seed,\n                params=bench_params,\n            )\n\n            result = problem.f(**fitness_params)\n            return result\n        except Exception as e:\n            self.update_state(state=\"FAILURE\", meta={\"status\": \"Experiment failed!\"})\n            raise e\n\n    def start_experiment(self, params):\n        return self.run_experiment.apply_async(args=[params])\n\n\nif __name__ == \"__main__\":\n    # handler = ExperimentTaskHandler()\n    # params = {\n    #     \"benchmark\": \"sample_bench\",\n    #     \"id\": 1,\n    #     \"budget\": 100,\n    #     \"seed\": 42,\n    #     \"bench_params\": {},\n    #     \"fitness_params\": {}\n    # }\n    \n    # handler.start_experiment(params)\n    pass\n"
  },
  {
    "path": "transopt/remote/server_manager.sh",
    "content": "#!/bin/bash\n\n# Define SESSION_NAME\nSESSION_NAME=\"experiment_server\"\n\nactivate_conda_env() {\n    local env_name=\"$1\"\n    local target_pane=\"$2\"\n    \n    if [ -n \"$env_name\" ]; then\n        tmux send-keys -t \"$target_pane\" \"conda activate $env_name\" C-m\n    fi\n}\n\ndisplay_shortcuts() {\n    local target_pane=\"$1\"\n    \n    # ANSI escape codes for bold and colored text\n    local BOLD=\"\\033[1m\"\n    local COLOR_RED=\"\\033[31m\"\n    local RESET=\"\\033[0m\"\n    \n    # Set display-time to 10 seconds (10000 milliseconds)\n    tmux set-option -t \"$target_pane\" display-time 10000\n    \n    # Display the shortcuts using tmux's display-message\n    tmux display-message -t \"$target_pane\" \"${BOLD}${COLOR_RED}SHORTCUTS: Ctrl-b n (next window), Ctrl-b p (previous window), Ctrl-b d (detach)${RESET}\"\n}\n\nrun_experiment_server() {\n    local env_name=\"$1\"\n    \n    # Start a new tmux session\n    tmux new-session -d -s \"$SESSION_NAME\"\n    \n    # Activate conda environment (if specified) and run the Celery worker in the first window\n    display_shortcuts \"$SESSION_NAME:0\"\n    activate_conda_env \"$env_name\" \"$SESSION_NAME:0\"\n    tmux send-keys -t \"$SESSION_NAME:0\" 'celery -A experiment_tasks.celery worker --loglevel=info' C-m\n    \n    # Run Flask in a new window\n    tmux new-window -t \"$SESSION_NAME:1\"\n    display_shortcuts \"$SESSION_NAME:1\"\n    activate_conda_env \"$env_name\" \"$SESSION_NAME:1\"\n    tmux send-keys -t \"$SESSION_NAME:1\" 'python experiment_server.py' C-m\n    \n    # Attach to the 'experiment_server' session\n    tmux attach -t \"$SESSION_NAME\"\n}\n\ncase \"$1\" in\n    start)\n        # Get the currently activated conda environment\n        CURRENT_CONDA_ENV=$(conda env list | grep '*' | awk '{print $1}')\n        \n        if [ -z \"$CURRENT_CONDA_ENV\" ]; then\n            echo \"No Conda environment is currently activated.\"\n            run_experiment_server \"\"\n        else\n            run_experiment_server \"$CURRENT_CONDA_ENV\"\n        fi\n    ;;\n    \n    attach)\n        tmux attach -t \"$SESSION_NAME\"\n    ;;\n    \n    stop)\n        tmux kill-session -t \"$SESSION_NAME\"\n    ;;\n    \n    *)\n        echo \"Usage: $0 {start|attach|stop}\"\n        exit 1\n    ;;\nesac"
  },
  {
    "path": "transopt/space/__init__.py",
    "content": "from .search_space import SearchSpace\nfrom .variable import Continuous, Categorical, Integer, LogContinuous"
  },
  {
    "path": "transopt/space/fidelity_space.py",
    "content": "import copy\n\nimport numpy as np\nimport pandas as pd\n\n\nclass FidelitySpace:\n    def __init__(self, fidelity_variables):\n        self.ranges = {var.name: var for var in fidelity_variables}\n    \n    @property\n    def fidelity_names(self):\n        return self.ranges.keys()\n    \n    \n    def get_fidelity_range(self):\n        return self.ranges\n    \n    "
  },
  {
    "path": "transopt/space/search_space.py",
    "content": "import copy\n\nimport numpy as np\nimport pandas as pd\n\n\nclass SearchSpace:\n    def __init__(self, variables):\n        self._variables = {var.name: var for var in variables}\n        self.variables_order = [var.name for var in variables]\n\n        # 计算并存储原始范围和类型信息\n        self.original_ranges = {\n            name: var.search_space_range for name, var in self._variables.items()\n        }\n        self.var_discrete = {\n            name: var.is_discrete for name, var in self._variables.items()\n        }\n\n        self.ranges = copy.deepcopy(self.original_ranges)\n    \n    \n    def __getitem__(self, item):\n        return self._variables.get(item)\n\n\n    def __contains__(self, item):\n        return item in self.variables_order\n\n    def get_design_variables(self):\n        return self._variables\n    \n    def get_design_variable(self, name):\n        return self._variables[name]\n\n    def get_hyperparameter_names(self):\n        return list(self._variables.keys())\n    \n    def get_hyperparameter_types(self):\n        return {name:self._variables[name].type for name in self._variables}\n    \n    \n    def map_to_design_space(self, values: np.ndarray) -> dict:\n        \"\"\"\n        Maps the given values from the search space to the design space.\n\n        Args:\n            values (np.ndarray): The values to be mapped from the search space. Must be a 1D NumPy array.\n\n        Returns:\n            dict: A dictionary containing the mapped values in the design space.\n\n        Raises:\n            ValueError: If the `values` parameter is not a 1D NumPy array.\n        \"\"\"\n\n        values_dict = {}\n        for i, name in enumerate(self.variables_order):\n            variable = self._variables[name]\n            value = values[i]\n            values_dict[name] = variable.map2design(value)\n        return values_dict\n    \n    def map_from_design_space(self, values_dict: dict) -> np.ndarray:\n        \"\"\"\n        Maps values from the design space to the search space.\n\n        Args:\n            values_dict (dict): A dictionary containing variable names as keys and their corresponding values.\n\n        Returns:\n            np.ndarray: An array of mapped values in the search space.\n        \"\"\"\n        values_array = np.zeros(len(self.variables_order))\n        for i, name in enumerate(self.variables_order):\n            variable = self._variables[name]\n            value = values_dict[name]\n            values_array[i] = variable.map2search(value)\n        return values_array\n\n    def update_range(self, name, new_range: tuple):\n        \"\"\"\n        Update the range of a variable in the search space.\n\n        Args:\n            name (str): The name of the variable.\n            new_range (tuple): The new range for the variable.\n\n        Raises:\n            ValueError: If the variable is not found in the search space or if the new range is out of the original range.\n        \"\"\"\n        if name in self._variables:\n            # Check if the new range is valid\n            ori_range = self.original_ranges[name]\n            if new_range[0] < ori_range[0] or new_range[1] > ori_range[1]:\n                raise ValueError(\n                    f\"New range {new_range} is out of the original range {ori_range}.\"\n                )\n                \n            self.ranges[name] = new_range\n        else:\n            raise ValueError(f\"Variable '{name}' not found in search space.\")\n\n\n"
  },
  {
    "path": "transopt/space/variable.py",
    "content": "import math\nimport numpy as np\n\nclass Variable:\n    def __init__(self, name, type_):\n        self.name = name\n        self.type = type_\n\n    @property\n    def search_space_range(self):\n        raise NotImplementedError\n\n    def map2design(self, value):\n        # To design space\n        raise NotImplementedError\n\n    def map2search(self, value):\n        # To search space\n        raise NotImplementedError\n\n\nclass Continuous(Variable):\n    def __init__(self, name, range_):\n        super().__init__(name, \"continuous\")\n        self.range = range_\n        \n        self.is_discrete = False\n\n    @property\n    def search_space_range(self):\n        return self.range\n\n    def map2design(self, value):\n        return float(value)  # Ensure it remains a float\n    \n    def map2search(self, value):\n        return value\n\n\nclass Categorical(Variable):\n    def __init__(self, name, categories):\n        super().__init__(name, \"categorical\")\n        self.categories = categories\n        self.range = (1, len(self.categories))\n        \n        self.is_discrete = True\n\n    @property\n    def search_space_range(self):\n        return (1, len(self.categories))\n\n    def map2design(self, value):\n        return self.categories[round(value) - 1]\n\n    def map2search(self, value):\n        return self.categories.index(value) + 1\n    \n\n\nclass Integer(Variable):\n    def __init__(self, name, range_):\n        super().__init__(name, \"integer\")\n        self.range = range_\n\n        self.is_discrete = True\n\n    @property\n    def search_space_range(self):\n        return self.range\n\n    def map2design(self, value):\n        # Ensure the mapped value is an integer\n        return int(round(value)) \n\n    def map2search(self, value):\n        return round(value)\n\nclass LargeInteger(Variable):\n    def __init__(self, name, range_):\n        super().__init__(name, \"large_integer\")\n        self.range = range_\n        self.is_discrete = True\n\n    @property\n    def search_space_range(self):\n        # Convert large range to a manageable float range\n        lower = 0\n        upper = 1\n        return lower, upper\n\n    def map2design(self, value):\n        # Map float value [0, 1] to the large integer range\n        return min(int(self.range[0] + value * (self.range[1] - self.range[0])), self.range[1])\n\n    def map2search(self, value):\n        # Map large integer value to a float value in [0, 1]\n        return (value - self.range[0]) / (self.range[1] - self.range[0])\n\nclass ExponentialInteger(Variable):\n    def __init__(self, name, range_):\n        super().__init__(name, \"exp2\")\n        # Adjust the range to ensure it is in the form of [2^x, 2^y] and satisfies 2^63 - 1\n        lower_bound = 2 ** math.floor(math.log2(range_[0]))\n        upper_bound = min(2 ** math.ceil(math.log2(range_[1])), 2 ** 63)\n        self.range = (lower_bound, upper_bound)\n        self.is_discrete = True\n\n    @property\n    def search_space_range(self):\n        lower = math.log2(self.range[0])\n        upper = math.log2(self.range[1])\n        return lower, upper\n\n    def map2design(self, value):\n        return int(2 ** value)\n\n    def map2search(self, value):\n        value = max(value, self.range[0])  # Ensure value is within valid range\n        return math.log2(value)\n    \nclass LogContinuous(Variable):\n    def __init__(self, name, range_):\n        super().__init__(name, \"log_continuous\")\n        self.range = range_\n        \n       
 self.is_discrete = False\n\n    @property\n    def search_space_range(self):\n        return self.range\n\n    def map2design(self, value):\n        return 10**value\n\n    def map2search(self, value):\n        return math.log10(value)\n\n"
  },
  {
    "path": "transopt/utils/Initialization.py",
    "content": "import random\r\nimport sobol_seq\r\nimport numpy as np\r\nfrom sklearn.cluster import KMeans\r\n\r\n\r\n\r\n\r\ndef InitData(Init_method, KB, Init, Xdim, Dty, **kwargs):\r\n\r\n    type = Init_method.split('_')[0]\r\n    method = Init_method.split('_')[1]\r\n    if type=='Continuous':\r\n        if method == 'random':\r\n            train_x = 2 * np.random.random(size=(Init, Xdim)) - 1\r\n        elif method == 'uniform':\r\n            train_x = 2 * sobol_seq.i4_sobol_generate(Xdim, Init) - 1\r\n        elif method == 'fix':\r\n            if KB.len == 0:\r\n                train_x = np.array([[-0.5],[-0.25],[0.5],[0.42]])\r\n            else:\r\n                train_x = np.array([[-0.1], [-0.8], [0.25], [0.4]])\r\n        elif method == 'LFL':\r\n            seed = kwargs['seed']\r\n            quantile = kwargs['quantile']\r\n            try:\r\n                train_x = np.loadtxt(f'./Bench/Lifelone_env/randIni/ini_{Xdim}d_{Init}p_{seed}.txt')\r\n                if len(train_x.shape) == 1:\r\n                    train_x = train_x[:,np.newaxis]\r\n            except:\r\n                train_x = 2 * np.random.random(size=(Init, Xdim)) - 1\r\n                np.savetxt(f'./Bench/Lifelone_env/randIni/ini_{Xdim}d_{Init}p_{seed}.txt', train_x)\r\n            anchor_point_num = int(quantile * Init)\r\n            temp_x = train_x[:anchor_point_num]\r\n            random_x = 2 * np.random.random(size=(100*Xdim, Xdim)) - 1\r\n            train_x = np.vstack((temp_x, random_x[-(Init-anchor_point_num):]))\r\n        idxs = None\r\n    elif type=='Tabular':\r\n        if method == 'random':\r\n            if 'Env' in kwargs.keys():\r\n                data_num = kwargs['Env'].get_dataset_size()\r\n                rand_idxs = random.sample(range(0, data_num), Init)\r\n                train_x = kwargs['Env'].get_var(rand_idxs)\r\n                idxs = rand_idxs\r\n        # elif Method == 'grid':\r\n        #     if KB.len == 0:\r\n        #         if np.float64 == Dty:\r\n        #             train_x = 2 * np.random.random(size=(Init, Xdim)) - 1\r\n        #         else:\r\n        #             print('Unsupport data type! 
shut down')\r\n        #             return\r\n        #     else:\r\n        #         train_x = KB.local_optimal[0]\r\n        #         for i in range(1, KB.len):\r\n        #             train_x = np.vstack((train_x, KB.local_optimal[i]))\r\n        #         train_x = np.unique(train_x, axis=0)\r\n        #\r\n        #         if len(train_x) == Init:\r\n        #             pass\r\n        #             # train_x = np.array(train_x, dtype=Dty)\r\n        #         elif len(train_x) > Init:\r\n        #             result_x = []\r\n        #             kmn = KMeans(n_clusters=int(Init), random_state=0)\r\n        #             kmn.fit(train_x)\r\n        #             lables = kmn.labels_\r\n        #             centers = kmn.cluster_centers_\r\n        #             for c_id,center in enumerate(centers):\r\n        #                 min_dis = 100\r\n        #                 min_dis_x_id = 0\r\n        #                 for x_id, x in enumerate(train_x):\r\n        #                     if lables[x_id] == c_id:\r\n        #                         dis = np.linalg.norm(x - center)\r\n        #                         if dis < min_dis:\r\n        #                             min_dis = dis\r\n        #                             min_dis_x_id = x_id\r\n        #                 result_x.append(train_x[min_dis_x_id])\r\n        #\r\n        #             train_x = np.array(result_x)\r\n        #             # train_x = np.concatenate(\r\n        #             #     (train_x, 2 * np.random.random(size=(Init - len(train_x), Xdim)) - 1))\r\n        #         else:\r\n        #             # train_x = np.array(train_x, dtype=Dty)\r\n        #             train_x = np.concatenate(\r\n        #                 (train_x, 2 * np.random.random(size=(Init - len(train_x), Xdim)) - 1))\r\n    else:\r\n        raise ValueError\r\n\r\n    return train_x, idxs\r\n"
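\n\nif __name__ == '__main__':\n    # Minimal usage sketch (illustration only): draw four random continuous\n    # initial points in 2-D. KB is only consulted by the 'fix' method, so\n    # passing None is fine here.\n    train_x, idxs = InitData('Continuous_random', KB=None, Init=4, Xdim=2, Dty=np.float64)\n    print(train_x.shape)  # (4, 2)\n    print(idxs)           # None for continuous initialization\n"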
  },
  {
    "path": "transopt/utils/Kernel.py",
    "content": "import GPy\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom GPy.mappings.constant import Constant\nfrom GPy.inference.latent_function_inference import expectation_propagation\nfrom GPy.inference.latent_function_inference import ExactGaussianInference\n\n\n\ndef construct_multi_objective_kernel(input_dim, output_dim, base_kernel='RBF', Q=1, rank=2):\n    # Choose the base kernel. Note: This part can be improved since it currently always chooses RBF.\n    k = GPy.kern.RBF(input_dim=input_dim)\n\n    kernel_list = [k] * Q\n    j = 1\n    kk = kernel_list[0]\n    K = kk.prod(\n        GPy.kern.Coregionalize(1, output_dim, active_dims=[input_dim], rank=rank, W=None, kappa=None,\n                               name='B'), name='%s%s' % ('ICM', 0))\n    for kernel in kernel_list[1:]:\n        K += kernel.prod(\n            GPy.kern.Coregionalize(1, output_dim, active_dims=[input_dim], rank=rank, W=None,\n                                   kappa=None, name='B'), name='%s%s' % ('ICM', j))\n        j += 1\n    return K"
  },
  {
    "path": "transopt/utils/Normalization.py",
    "content": "import numpy as np\r\nfrom typing import Union, Dict, List\r\nfrom sklearn.preprocessing import power_transform\r\nfrom agent.registry import normalizer_registry,normalizer_register\r\n\r\ndef get_normalizer(name):\r\n    \"\"\"Create the optimizer object.\"\"\"\r\n    normalizer = normalizer_registry.get(name)\r\n\r\n\r\n    if normalizer is not None:\r\n        return normalizer\r\n    else:\r\n        # 处理任务名称不在注册表中的情况\r\n        print(f\"Normalizer '{name}' not found in the registry.\")\r\n        raise NameError\r\n\r\n\r\n@normalizer_register('pt')\r\ndef normalize_with_power_transform(data: Union[np.ndarray, list], mean=None, std=None):\r\n    \"\"\"\r\n    Normalize the data using mean and standard deviation, followed by power transformation.\r\n\r\n    Parameters:\r\n    - data (Union[np.ndarray, list]): Input data to be normalized.\r\n    - mean (float, optional): Mean for normalization.\r\n    - std (float, optional): Std for normalization.\r\n\r\n    Returns:\r\n    - Union[np.ndarray, list]: Normalized and power transformed data.\r\n    \"\"\"\r\n\r\n    # Handle multiple data sets (list of ndarrays)\r\n    if type(data) is list:\r\n        all_include = data[0]\r\n        data_len = [0, len(data[0])]\r\n        for Y in data[1:]:\r\n            all_include = np.concatenate((all_include, Y), axis=0)\r\n            data_len.append(len(all_include))\r\n    else:  # Single data set\r\n        all_include = data\r\n        data_len = [0, len(data)]\r\n\r\n    # Calculate mean and std if not provided\r\n    if mean is None:\r\n        mean = np.mean(all_include)\r\n    if std is None:\r\n        std = np.std(all_include)\r\n\r\n    # Normalize and power transform\r\n    all_include = power_transform((all_include - mean) / std, method='yeo-johnson')\r\n\r\n    # Split back into multiple data sets if originally provided as a list\r\n    if type(data) is list:\r\n        new_data = []\r\n        for i in range(len(data_len) - 1):\r\n            new_data.append(all_include[data_len[i]:data_len[i + 1]])\r\n        return new_data\r\n\r\n    # Return the transformed data\r\n    return all_include\r\n\r\n\r\ndef rank_normalize_with_power_transform(data: Union[np.ndarray, list]):\r\n    \"\"\"\r\n    This function first replaces the actual values of the data with their ranks.\r\n    After that, it standardizes and then applies a power transform (yeo-johnson) on the data.\r\n\r\n    Args:\r\n    - data (Union[np.ndarray, list]): The input data, either as a single ndarray or as a list of ndarrays.\r\n    - mean (float, optional): Mean value to use for standardization. If not provided, it's computed from data.\r\n    - std (float, optional): Standard deviation value to use for standardization. 
If not provided, it's computed from data.\r\n\r\n    Returns:\r\n    - np.ndarray or list of np.ndarray: Transformed data.\r\n    \"\"\"\r\n\r\n    # Single ndarray input\r\n    if isinstance(data, np.ndarray):\r\n        # Replace the values in data with their corresponding ranks\r\n        sorted_indices = np.argsort(data, axis=0)[:, 0]\r\n        rank_array = np.zeros(shape=data.shape[0])\r\n        rank_array[sorted_indices] = np.arange(1, len(data) + 1)\r\n\r\n        # Apply standardization followed by power transformation\r\n        return power_transform(rank_array[:, np.newaxis], method='yeo-johnson')\r\n\r\n    # List of ndarrays input\r\n    elif isinstance(data, list):\r\n        new_data = []\r\n        all_include = data[0]\r\n        data_len = [0, len(data[0])]\r\n\r\n        # Combine all datasets in the list for subsequent processing\r\n        for Y in data[1:]:\r\n            all_include = np.concatenate((all_include, Y), axis=0)\r\n            data_len.append(len(all_include))\r\n\r\n        # Replace the values in combined data with their corresponding ranks\r\n        sorted_indices = np.argsort(all_include, axis=0)[:, 0]\r\n        rank_array = np.zeros(shape=all_include.shape[0])\r\n        rank_array[sorted_indices] = np.arange(1, len(all_include) + 1)\r\n\r\n\r\n        # Apply standardization followed by power transformation\r\n        all_include = power_transform((rank_array[:, np.newaxis]), method='yeo-johnson')\r\n\r\n        # Split the transformed data back into separate datasets based on the original list\r\n        for i in range(len(data_len) - 1):\r\n            new_data.append(all_include[data_len[i]:data_len[i + 1]])\r\n\r\n        return new_data\r\n\r\n    # Raise an error for unsupported input types\r\n    raise ValueError('Unsupported input type for normalization and power transform.')\r\n\r\n\r\n@normalizer_register('norm')\r\ndef normalize(data:Union[List, Dict, np.ndarray], mean=None, std=None):\r\n    \"\"\"\r\n    Normalize the data using the given mean and standard deviation or compute them from the data if not provided.\r\n\r\n    Parameters:\r\n    - data (ndarray): The data to be normalized.\r\n    - mean (float, optional): If provided, use this mean for normalization. Otherwise, compute from the data.\r\n    - std (float, optional): If provided, use this standard deviation for normalization. Otherwise, compute from the data.\r\n\r\n    Returns:\r\n    - ndarray: Normalized data.\r\n    \"\"\"\r\n\r\n\r\n    # Compute mean and std from data if not provided\r\n    if isinstance(data, np.ndarray):\r\n        if mean is None:\r\n            mean = np.mean(data)\r\n        if std is None:\r\n            std = np.std(data)\r\n        return (data - mean) / std\r\n    elif isinstance(data, list):\r\n        tmp = []\r\n        for d in data:\r\n            if mean is None:\r\n                mean = np.mean(d)\r\n            if std is None:\r\n                std = np.std(d)\r\n            tmp.append((d - mean) / std)\r\n        return tmp\r\n    else:\r\n        raise TypeError(\"Input data must be a numpy array or a list of numpy arrays.\")\r\n"
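\n\nif __name__ == '__main__':\n    # Minimal usage sketch (illustration only): z-score two datasets, each\n    # with its own mean and std, via the plain 'norm' normalizer.\n    a = np.array([1.0, 2.0, 3.0])\n    b = np.array([10.0, 20.0, 30.0])\n    for out in normalize([a, b]):\n        print(round(out.mean(), 6), round(out.std(), 6))  # ~0.0 and 1.0 each\n"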
  },
  {
    "path": "transopt/utils/Prior.py",
    "content": "# Copyright (c) 2012 - 2014, GPy authors (see AUTHORS.txt).\n# Licensed under the BSD 3-clause license (see LICENSE.txt)\n\n\nimport warnings\nimport weakref\nimport numpy as np\nfrom scipy.special import gammaln, digamma\nfrom GPy.util.linalg import pdinv\nfrom paramz.domains import _REAL, _POSITIVE, _NEGATIVE\n\n\nclass Prior(object):\n    domain = None\n    _instance = None\n    def __new__(cls, *args, **kwargs):\n        if not cls._instance or cls._instance.__class__ is not cls:\n                newfunc = super(Prior, cls).__new__\n                if newfunc is object.__new__:\n                    cls._instance = newfunc(cls)\n                else:\n                    cls._instance = newfunc(cls, *args, **kwargs)\n                return cls._instance\n\n    def pdf(self, x):\n        return np.exp(self.lnpdf(x))\n\n    def plot(self):\n        import sys\n\n        assert \"matplotlib\" in sys.modules, \"matplotlib package has not been imported.\"\n        from GPy.plotting.matplot_dep import priors_plots\n\n        priors_plots.univariate_plot(self)\n\n    def __repr__(self, *args, **kwargs):\n        return self.__str__()\n\n\nclass Gaussian(Prior):\n    \"\"\"\n    Implementation of the univariate Gaussian probability function, coupled with random variables.\n\n    :param mu: mean\n    :param sigma: standard deviation\n\n    .. Note:: Bishop 2006 notation is used throughout the code\n\n    \"\"\"\n    domain = _REAL\n    _instances = []\n\n    def __new__(cls, mu=0, sigma=1):  # Singleton:\n        if cls._instances:\n            cls._instances[:] = [instance for instance in cls._instances if instance()]\n            for instance in cls._instances:\n                if instance().mu == mu and instance().sigma == sigma:\n                    return instance()\n        newfunc = super(Prior, cls).__new__\n        if newfunc is object.__new__:\n            o = newfunc(cls)\n        else:\n            o = newfunc(cls, mu, sigma)\n        cls._instances.append(weakref.ref(o))\n        return cls._instances[-1]()\n\n    def __init__(self, mu, sigma):\n        self.mu = float(mu)\n        self.sigma = float(sigma)\n        self.sigma2 = np.square(self.sigma)\n        self.constant = -0.5 * np.log(2 * np.pi * self.sigma2)\n\n    def __str__(self):\n        return \"N({:.2g}, {:.2g})\".format(self.mu, self.sigma)\n\n    def lnpdf(self, x):\n        return self.constant - 0.5 * np.square(x - self.mu) / self.sigma2\n\n    def lnpdf_grad(self, x):\n        return -(x - self.mu) / self.sigma2\n\n    def rvs(self, n):\n        return np.random.randn(n) * self.sigma + self.mu\n\n    def getstate(self):\n        return self.mu, self.sigma\n\n    def setstate(self, state):\n        self.mu = state[0]\n        self.sigma = state[1]\n        self.sigma2 = np.square(self.sigma)\n        self.constant = -0.5 * np.log(2 * np.pi * self.sigma2)\n\nclass Uniform(Prior):\n    _instances = []\n\n    def __new__(cls, lower=0, upper=1):  # Singleton:\n        if cls._instances:\n            cls._instances[:] = [instance for instance in cls._instances if instance()]\n            for instance in cls._instances:\n                if instance().lower == lower and instance().upper == upper:\n                    return instance()\n        newfunc = super(Prior, cls).__new__\n        if newfunc is object.__new__:\n            o = newfunc(cls)\n        else:\n            o = newfunc(cls, lower, upper)\n        cls._instances.append(weakref.ref(o))\n        return cls._instances[-1]()\n\n    def 
__init__(self, lower, upper):\n        self.lower = float(lower)\n        self.upper = float(upper)\n        assert self.lower < self.upper, \"Lower needs to be strictly smaller than upper.\"\n        if self.lower >= 0:\n            self.domain = _POSITIVE\n        elif self.upper <= 0:\n            self.domain = _NEGATIVE\n        else:\n            self.domain = _REAL\n\n    def __str__(self):\n        return \"[{:.2g}, {:.2g}]\".format(self.lower, self.upper)\n\n    def lnpdf(self, x):\n        region = (x >= self.lower) * (x <= self.upper)\n        return region\n\n    def lnpdf_grad(self, x):\n        return np.zeros(x.shape)\n\n    def rvs(self, n):\n        return np.random.uniform(self.lower, self.upper, size=n)\n\n#     def __getstate__(self):\n#         return self.lower, self.upper\n#\n#     def __setstate__(self, state):\n#         self.lower = state[0]\n#         self.upper = state[1]\n\nclass LogGaussian(Gaussian):\n    \"\"\"\n    Implementation of the univariate *log*-Gaussian probability function, coupled with random variables.\n\n    :param mu: mean\n    :param sigma: standard deviation\n\n    .. Note:: Bishop 2006 notation is used throughout the code\n\n    \"\"\"\n    domain = _POSITIVE\n    _instances = []\n\n    def __new__(cls, mu=0, sigma=1, name=''):  # Singleton:\n        # if cls._instances:\n        #     cls._instances[:] = [instance for instance in cls._instances if instance()]\n        #     for instance in cls._instances:\n        #         if instance().mu == mu and instance().sigma == sigma:\n        #             return instance()\n        newfunc = super(Prior, cls).__new__\n        if newfunc is object.__new__:\n            o = newfunc(cls)\n        else:\n            o = newfunc(cls, mu, sigma)\n        cls._instances.append(weakref.ref(o))\n        return cls._instances[-1]()\n\n    def __init__(self, mu, sigma, name):\n        self.mu = float(mu)\n        self.sigma = float(sigma)\n        self.sigma2 = np.square(self.sigma)\n        self.constant = -0.5 * np.log(2 * np.pi * self.sigma2)\n        self.name = name\n\n    def __str__(self):\n        return \"lnN({:.2g}, {:.2g})\".format(self.mu, self.sigma)\n\n    def lnpdf(self, x):\n        return self.constant - 0.5 * np.square(np.log(x) - self.mu) / self.sigma2 - np.log(x)\n\n    def lnpdf_grad(self, x):\n        return -((np.log(x) - self.mu) / self.sigma2 + 1.) / x\n\n    def rvs(self, n):\n        return np.exp(np.random.randn(int(n)) * self.sigma + self.mu)\n\n    def getstate(self):\n        return self.mu, self.sigma\n\n    def setstate(self, state):\n        self.mu = state[0]\n        self.sigma = state[1]\n        self.sigma2 = np.square(self.sigma)\n        self.constant = -0.5 * np.log(2 * np.pi * self.sigma2)\n\nclass MultivariateGaussian(Prior):\n    \"\"\"\n    Implementation of the multivariate Gaussian probability function, coupled with random variables.\n\n    :param mu: mean (N-dimensional array)\n    :param var: covariance matrix (NxN)\n\n    .. 
Note:: Bishop 2006 notation is used throughout the code\n\n    \"\"\"\n    domain = _REAL\n    _instances = []\n\n    def __new__(cls, mu=0, var=1):  # Singleton:\n        if cls._instances:\n            cls._instances[:] = [instance for instance in cls._instances if\n                                 instance()]\n            for instance in cls._instances:\n                if np.all(instance().mu == mu) and np.all(\n                        instance().var == var):\n                    return instance()\n        newfunc = super(Prior, cls).__new__\n        if newfunc is object.__new__:\n            o = newfunc(cls)\n        else:\n            o = newfunc(cls, mu, var)\n        cls._instances.append(weakref.ref(o))\n        return cls._instances[-1]()\n\n    def __init__(self, mu, var):\n        self.mu = np.array(mu).flatten()\n        self.var = np.array(var)\n        assert len(self.var.shape) == 2, 'Covariance must be a matrix'\n        assert self.var.shape[0] == self.var.shape[1], \\\n            'Covariance must be a square matrix'\n        assert self.var.shape[0] == self.mu.size\n        self.input_dim = self.mu.size\n        self.inv, _, self.hld, _ = pdinv(self.var)\n        self.constant = -0.5 * (self.input_dim * np.log(2 * np.pi) + self.hld)\n\n    def __str__(self):\n        return 'MultiN(' + str(self.mu) + ', ' + str(np.diag(self.var)) + ')'\n\n    def summary(self):\n        raise NotImplementedError\n\n    def pdf(self, x):\n        x = np.array(x).flatten()\n        return np.exp(self.lnpdf(x))\n\n    def lnpdf(self, x):\n        x = np.array(x).flatten()\n        d = x - self.mu\n        return self.constant - 0.5 * np.dot(d.T, np.dot(self.inv, d))\n\n    def lnpdf_grad(self, x):\n        x = np.array(x).flatten()\n        d = x - self.mu\n        return - np.dot(self.inv, d)\n\n    def rvs(self, n):\n        return np.random.multivariate_normal(self.mu, self.var, n)\n\n    def plot(self):\n        import sys\n\n        assert \"matplotlib\" in sys.modules, \"matplotlib package has not been imported.\"\n        from GPy.plotting.matplot_dep import priors_plots\n\n        priors_plots.multivariate_plot(self)\n\n    def __getstate__(self):\n        return self.mu, self.var\n\n    def __setstate__(self, state):\n        self.mu = np.array(state[0]).flatten()\n        self.var = state[1]\n        assert len(self.var.shape) == 2, 'Covariance must be a matrix'\n        assert self.var.shape[0] == self.var.shape[1], \\\n            'Covariance must be a square matrix'\n        assert self.var.shape[0] == self.mu.size\n        self.input_dim = self.mu.size\n        self.inv, _, self.hld, _ = pdinv(self.var)\n        self.constant = -0.5 * (self.input_dim * np.log(2 * np.pi) + self.hld)\n\ndef gamma_from_EV(E, V):\n    warnings.warn(\"use Gamma.from_EV to create Gamma Prior\", FutureWarning)\n    return Gamma.from_EV(E, V)\n\n\nclass Gamma(Prior):\n    \"\"\"\n    Implementation of the Gamma probability function, coupled with random variables.\n\n    :param a: shape parameter\n    :param b: rate parameter (warning: it's the *inverse* of the scale)\n\n    .. 
Note:: Bishop 2006 notation is used throughout the code\n\n    \"\"\"\n    domain = _POSITIVE\n    _instances = []\n\n    def __new__(cls, a=1, b=.5, name = ''):  # Singleton:\n        if cls._instances:\n            cls._instances[:] = [instance for instance in cls._instances if instance()]\n            for instance in cls._instances:\n                if instance().a == a and instance().b == b and instance().name == name:\n                    return instance()\n        newfunc = super(Prior, cls).__new__\n        if newfunc is object.__new__:\n            o = newfunc(cls)\n        else:\n            o = newfunc(cls, a, b)\n        cls._instances.append(weakref.ref(o))\n        return cls._instances[-1]()\n\n    @property\n    def a(self):\n        return self._a\n\n    @property\n    def b(self):\n        return self._b\n\n    def __init__(self, a, b, name=''):\n        self._a = float(a)\n        self._b = float(b)\n        self.name = name\n        self.constant = -gammaln(self.a) + a * np.log(b)\n\n    def __str__(self):\n        return \"Ga({:.2g}, {:.2g})\".format(self.a, self.b)\n\n    def summary(self):\n        ret = {\"E[x]\": self.a / self.b, \\\n               \"E[ln x]\": digamma(self.a) - np.log(self.b), \\\n               \"var[x]\": self.a / self.b / self.b, \\\n               \"Entropy\": gammaln(self.a) - (self.a - 1.) * digamma(self.a) - np.log(self.b) + self.a}\n        if self.a > 1:\n            ret['Mode'] = (self.a - 1.) / self.b\n        else:\n            ret['mode'] = np.nan\n        return ret\n\n    def lnpdf(self, x):\n        return self.constant + (self.a - 1) * np.log(x) - self.b * x\n\n    def lnpdf_grad(self, x):\n        return (self.a - 1.) / x - self.b\n\n    def rvs(self, n):\n        return np.random.gamma(scale=1. / self.b, shape=self.a, size=n)\n\n\n    def getstate(self):\n        return self.a, self.b\n\n    def update(self, value):\n        self._a += 1\n        self._b += value\n\n    @staticmethod\n    def from_EV(E, V):\n        \"\"\"\n        Creates an instance of a Gamma Prior  by specifying the Expected value(s)\n        and Variance(s) of the distribution.\n\n        :param E: expected value\n        :param V: variance\n        \"\"\"\n        a = np.square(E) / V\n        b = E / V\n        return Gamma(a, b)\n\n    def __getstate__(self):\n        return self.a, self.b\n\n    def __setstate__(self, state):\n        self._a = state[0]\n        self._b = state[1]\n        self.constant = -gammaln(self.a) + self.a * np.log(self.b)\n\nclass InverseGamma(Gamma):\n    \"\"\"\n    Implementation of the inverse-Gamma probability function, coupled with random variables.\n\n    :param a: shape parameter\n    :param b: rate parameter (warning: it's the *inverse* of the scale)\n\n    .. Note:: Bishop 2006 notation is used throughout the code\n\n    \"\"\"\n    domain = _POSITIVE\n    _instances = []\n\n    def __str__(self):\n        return \"iGa({:.2g}, {:.2g})\".format(self.a, self.b)\n\n    def summary(self):\n        return {}\n\n    @staticmethod\n    def from_EV(E, V):\n        raise NotImplementedError\n\n    def lnpdf(self, x):\n        return self.constant - (self.a + 1) * np.log(x) - self.b / x\n\n    def lnpdf_grad(self, x):\n        return -(self.a + 1.) / x + self.b / x ** 2\n\n    def rvs(self, n):\n        return 1. / np.random.gamma(scale=1. 
/ self.b, shape=self.a, size=n)\n\nclass DGPLVM_KFDA(Prior):\n    \"\"\"\n    Implementation of the Discriminative Gaussian Process Latent Variable function using\n    Kernel Fisher Discriminant Analysis by Seung-Jean Kim for implementing Face paper\n    by Chaochao Lu.\n\n    :param lambdaa: constant\n    :param sigma2: constant\n\n    .. Note:: Surpassing Human-Level Face paper dgplvm implementation\n\n    \"\"\"\n    domain = _REAL\n    # _instances = []\n    # def __new__(cls, lambdaa, sigma2):  # Singleton:\n    #     if cls._instances:\n    #         cls._instances[:] = [instance for instance in cls._instances if instance()]\n    #         for instance in cls._instances:\n    #             if instance().mu == mu and instance().sigma == sigma:\n    #                 return instance()\n    #     o = super(Prior, cls).__new__(cls, mu, sigma)\n    #     cls._instances.append(weakref.ref(o))\n    #     return cls._instances[-1]()\n\n    def __init__(self, lambdaa, sigma2, lbl, kern, x_shape):\n        \"\"\"A description for init\"\"\"\n        self.datanum = lbl.shape[0]\n        self.classnum = lbl.shape[1]\n        self.lambdaa = lambdaa\n        self.sigma2 = sigma2\n        self.lbl = lbl\n        self.kern = kern\n        lst_ni = self.compute_lst_ni()\n        self.a = self.compute_a(lst_ni)\n        self.A = self.compute_A(lst_ni)\n        self.x_shape = x_shape\n\n    def get_class_label(self, y):\n        for idx, v in enumerate(y):\n            if v == 1:\n                return idx\n        return -1\n\n    # This function assigns each data point to its own class\n    # and returns the dictionary which contains the class name and parameters.\n    def compute_cls(self, x):\n        cls = {}\n        # Appending each data point to its proper class\n        for j in range(self.datanum):\n            class_label = self.get_class_label(self.lbl[j])\n            if class_label not in cls:\n                cls[class_label] = []\n            cls[class_label].append(x[j])\n        if len(cls) > 2:\n            for i in range(2, self.classnum):\n                del cls[i]\n        return cls\n\n    def x_reduced(self, cls):\n        x1 = cls[0]\n        x2 = cls[1]\n        x = np.concatenate((x1, x2), axis=0)\n        return x\n\n    def compute_lst_ni(self):\n        lst_ni = []\n        lst_ni1 = []\n        lst_ni2 = []\n        f1 = (np.where(self.lbl[:, 0] == 1)[0])\n        f2 = (np.where(self.lbl[:, 1] == 1)[0])\n        for idx in f1:\n            lst_ni1.append(idx)\n        for idx in f2:\n            lst_ni2.append(idx)\n        lst_ni.append(len(lst_ni1))\n        lst_ni.append(len(lst_ni2))\n        return lst_ni\n\n    def compute_a(self, lst_ni):\n        a = np.ones((self.datanum, 1))\n        count = 0\n        for N_i in lst_ni:\n            if N_i == lst_ni[0]:\n                a[count:count + N_i] = (float(1) / N_i) * a[count]\n                count += N_i\n            else:\n                if N_i == lst_ni[1]:\n                    a[count: count + N_i] = -(float(1) / N_i) * a[count]\n                    count += N_i\n        return a\n\n    def compute_A(self, lst_ni):\n        A = np.zeros((self.datanum, self.datanum))\n        idx = 0\n        for N_i in lst_ni:\n            B = float(1) / np.sqrt(N_i) * (np.eye(N_i) - ((float(1) / N_i) * np.ones((N_i, N_i))))\n            A[idx:idx + N_i, idx:idx + N_i] = B\n            idx += N_i\n        return A\n\n    # Here log function\n    def lnpdf(self, x):\n        x = x.reshape(self.x_shape)\n        K = 
self.kern.K(x)\n        a_trans = np.transpose(self.a)\n        paran = self.lambdaa * np.eye(x.shape[0]) + self.A.dot(K).dot(self.A)\n        inv_part = pdinv(paran)[0]\n        J = a_trans.dot(K).dot(self.a) - a_trans.dot(K).dot(self.A).dot(inv_part).dot(self.A).dot(K).dot(self.a)\n        J_star = (1. / self.lambdaa) * J\n        return (-1. / self.sigma2) * J_star\n\n    # Here gradient function\n    def lnpdf_grad(self, x):\n        x = x.reshape(self.x_shape)\n        K = self.kern.K(x)\n        paran = self.lambdaa * np.eye(x.shape[0]) + self.A.dot(K).dot(self.A)\n        inv_part = pdinv(paran)[0]\n        b = self.A.dot(inv_part).dot(self.A).dot(K).dot(self.a)\n        a_Minus_b = self.a - b\n        a_b_trans = np.transpose(a_Minus_b)\n        DJ_star_DK = (1. / self.lambdaa) * (a_Minus_b.dot(a_b_trans))\n        DJ_star_DX = self.kern.gradients_X(DJ_star_DK, x)\n        return (-1. / self.sigma2) * DJ_star_DX\n\n    def rvs(self, n):\n        return np.random.rand(n)  # A WRONG implementation\n\n    def __str__(self):\n        return 'DGPLVM_prior'\n\n    def __getstate___(self):\n        return self.lbl, self.lambdaa, self.sigma2, self.kern, self.x_shape\n\n    def __setstate__(self, state):\n        lbl, lambdaa, sigma2, kern, a, A, x_shape = state\n        self.datanum = lbl.shape[0]\n        self.classnum = lbl.shape[1]\n        self.lambdaa = lambdaa\n        self.sigma2 = sigma2\n        self.lbl = lbl\n        self.kern = kern\n        lst_ni = self.compute_lst_ni()\n        self.a = self.compute_a(lst_ni)\n        self.A = self.compute_A(lst_ni)\n        self.x_shape = x_shape\n\n\nclass DGPLVM(Prior):\n    \"\"\"\n    Implementation of the Discriminative Gaussian Process Latent Variable model paper, by Raquel.\n\n    :param sigma2: constant\n\n    .. Note:: DGPLVM for Classification paper implementation\n\n    \"\"\"\n    domain = _REAL\n\n    def __new__(cls, sigma2, lbl, x_shape):\n        return super(Prior, cls).__new__(cls, sigma2, lbl, x_shape)\n\n    def __init__(self, sigma2, lbl, x_shape):\n        self.sigma2 = sigma2\n        # self.x = x\n        self.lbl = lbl\n        self.classnum = lbl.shape[1]\n        self.datanum = lbl.shape[0]\n        self.x_shape = x_shape\n        self.dim = x_shape[1]\n\n    def get_class_label(self, y):\n        for idx, v in enumerate(y):\n            if v == 1:\n                return idx\n        return -1\n\n    # This function assigns each data point to its own class\n    # and returns the dictionary which contains the class name and parameters.\n    def compute_cls(self, x):\n        cls = {}\n        # Appending each data point to its proper class\n        for j in range(self.datanum):\n            class_label = self.get_class_label(self.lbl[j])\n            if class_label not in cls:\n                cls[class_label] = []\n            cls[class_label].append(x[j])\n        return cls\n\n    # This function computes mean of each class. 
The mean is calculated through each dimension\n    def compute_Mi(self, cls):\n        M_i = np.zeros((self.classnum, self.dim))\n        for i in cls:\n            # Mean of each class\n            class_i = cls[i]\n            M_i[i] = np.mean(class_i, axis=0)\n        return M_i\n\n    # Adding data points as tuple to the dictionary so that we can access indices\n    def compute_indices(self, x):\n        data_idx = {}\n        for j in range(self.datanum):\n            class_label = self.get_class_label(self.lbl[j])\n            if class_label not in data_idx:\n                data_idx[class_label] = []\n            t = (j, x[j])\n            data_idx[class_label].append(t)\n        return data_idx\n\n    # Adding indices to the list so we can access whole the indices\n    def compute_listIndices(self, data_idx):\n        lst_idx = []\n        lst_idx_all = []\n        for i in data_idx:\n            if len(lst_idx) == 0:\n                pass\n                #Do nothing, because it is the first time list is created so is empty\n            else:\n                lst_idx = []\n            # Here we put indices of each class in to the list called lst_idx_all\n            for m in range(len(data_idx[i])):\n                lst_idx.append(data_idx[i][m][0])\n            lst_idx_all.append(lst_idx)\n        return lst_idx_all\n\n    # This function calculates between classes variances\n    def compute_Sb(self, cls, M_i, M_0):\n        Sb = np.zeros((self.dim, self.dim))\n        for i in cls:\n            B = (M_i[i] - M_0).reshape(self.dim, 1)\n            B_trans = B.transpose()\n            Sb += (float(len(cls[i])) / self.datanum) * B.dot(B_trans)\n        return Sb\n\n    # This function calculates within classes variances\n    def compute_Sw(self, cls, M_i):\n        Sw = np.zeros((self.dim, self.dim))\n        for i in cls:\n            N_i = float(len(cls[i]))\n            W_WT = np.zeros((self.dim, self.dim))\n            for xk in cls[i]:\n                W = (xk - M_i[i])\n                W_WT += np.outer(W, W)\n            Sw += (N_i / self.datanum) * ((1. 
/ N_i) * W_WT)\n        return Sw\n\n    # Calculating beta and Bi for Sb\n    def compute_sig_beta_Bi(self, data_idx, M_i, M_0, lst_idx_all):\n        # import pdb\n        # pdb.set_trace()\n        B_i = np.zeros((self.classnum, self.dim))\n        Sig_beta_B_i_all = np.zeros((self.datanum, self.dim))\n        for i in data_idx:\n            # pdb.set_trace()\n            # Calculating Bi\n            B_i[i] = (M_i[i] - M_0).reshape(1, self.dim)\n        for k in range(self.datanum):\n            for i in data_idx:\n                N_i = float(len(data_idx[i]))\n                if k in lst_idx_all[i]:\n                    beta = (float(1) / N_i) - (float(1) / self.datanum)\n                    Sig_beta_B_i_all[k] += float(N_i) / self.datanum * (beta * B_i[i])\n                else:\n                    beta = -(float(1) / self.datanum)\n                    Sig_beta_B_i_all[k] += float(N_i) / self.datanum * (beta * B_i[i])\n        Sig_beta_B_i_all = Sig_beta_B_i_all.transpose()\n        return Sig_beta_B_i_all\n\n\n    # Calculating W_j s separately so we can access all the W_j s anytime\n    def compute_wj(self, data_idx, M_i):\n        W_i = np.zeros((self.datanum, self.dim))\n        for i in data_idx:\n            N_i = float(len(data_idx[i]))\n            for tpl in data_idx[i]:\n                xj = tpl[1]\n                j = tpl[0]\n                W_i[j] = (xj - M_i[i])\n        return W_i\n\n    # Calculating alpha and Wj for Sw\n    def compute_sig_alpha_W(self, data_idx, lst_idx_all, W_i):\n        Sig_alpha_W_i = np.zeros((self.datanum, self.dim))\n        for i in data_idx:\n            N_i = float(len(data_idx[i]))\n            for tpl in data_idx[i]:\n                k = tpl[0]\n                for j in lst_idx_all[i]:\n                    if k == j:\n                        alpha = 1 - (float(1) / N_i)\n                        Sig_alpha_W_i[k] += (alpha * W_i[j])\n                    else:\n                        alpha = 0 - (float(1) / N_i)\n                        Sig_alpha_W_i[k] += (alpha * W_i[j])\n        Sig_alpha_W_i = (1. 
/ self.datanum) * np.transpose(Sig_alpha_W_i)\n        return Sig_alpha_W_i\n\n    # This function calculates log of our prior\n    def lnpdf(self, x):\n        x = x.reshape(self.x_shape)\n        cls = self.compute_cls(x)\n        M_0 = np.mean(x, axis=0)\n        M_i = self.compute_Mi(cls)\n        Sb = self.compute_Sb(cls, M_i, M_0)\n        Sw = self.compute_Sw(cls, M_i)\n        # sb_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))\n        #Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)\n        #Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))[0]\n        Sb_inv_N = pdinv(Sb + np.eye(Sb.shape[0])*0.1)[0]\n        return (-1 / self.sigma2) * np.trace(Sb_inv_N.dot(Sw))\n\n    # This function calculates derivative of the log of prior function\n    def lnpdf_grad(self, x):\n        x = x.reshape(self.x_shape)\n        cls = self.compute_cls(x)\n        M_0 = np.mean(x, axis=0)\n        M_i = self.compute_Mi(cls)\n        Sb = self.compute_Sb(cls, M_i, M_0)\n        Sw = self.compute_Sw(cls, M_i)\n        data_idx = self.compute_indices(x)\n        lst_idx_all = self.compute_listIndices(data_idx)\n        Sig_beta_B_i_all = self.compute_sig_beta_Bi(data_idx, M_i, M_0, lst_idx_all)\n        W_i = self.compute_wj(data_idx, M_i)\n        Sig_alpha_W_i = self.compute_sig_alpha_W(data_idx, lst_idx_all, W_i)\n\n        # Calculating inverse of Sb and its transpose and minus\n        # Sb_inv_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))\n        #Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)\n        #Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))[0]\n        Sb_inv_N = pdinv(Sb + np.eye(Sb.shape[0])*0.1)[0]\n        Sb_inv_N_trans = np.transpose(Sb_inv_N)\n        Sb_inv_N_trans_minus = -1 * Sb_inv_N_trans\n        Sw_trans = np.transpose(Sw)\n\n        # Calculating DJ/DXk\n        DJ_Dxk = 2 * (\n            Sb_inv_N_trans_minus.dot(Sw_trans).dot(Sb_inv_N_trans).dot(Sig_beta_B_i_all) + Sb_inv_N_trans.dot(\n                Sig_alpha_W_i))\n        # Calculating derivative of the log of the prior\n        DPx_Dx = ((-1 / self.sigma2) * DJ_Dxk)\n        return DPx_Dx.T\n\n    # def frb(self, x):\n    #     from functools import partial\n    #     from GPy.models import GradientChecker\n    #     f = partial(self.lnpdf)\n    #     df = partial(self.lnpdf_grad)\n    #     grad = GradientChecker(f, df, x, 'X')\n    #     grad.checkgrad(verbose=1)\n\n    def rvs(self, n):\n        return np.random.rand(n)  # A WRONG implementation\n\n    def __str__(self):\n        return 'DGPLVM_prior_Raq'\n\n\n# ******************************************\n\nfrom GPy.core import Parameterized\nfrom GPy.core import Param\n\nclass DGPLVM_Lamda(Prior, Parameterized):\n    \"\"\"\n    Implementation of the Discriminative Gaussian Process Latent Variable model paper, by Raquel.\n\n    :param sigma2: constant\n\n    .. 
Note:: DGPLVM for Classification paper implementation\n\n    \"\"\"\n    domain = _REAL\n    # _instances = []\n    # def __new__(cls, mu, sigma): # Singleton:\n    #     if cls._instances:\n    #         cls._instances[:] = [instance for instance in cls._instances if instance()]\n    #         for instance in cls._instances:\n    #             if instance().mu == mu and instance().sigma == sigma:\n    #                 return instance()\n    #     o = super(Prior, cls).__new__(cls, mu, sigma)\n    #     cls._instances.append(weakref.ref(o))\n    #     return cls._instances[-1]()\n\n    def __init__(self, sigma2, lbl, x_shape, lamda, name='DP_prior'):\n        super(DGPLVM_Lamda, self).__init__(name=name)\n        self.sigma2 = sigma2\n        # self.x = x\n        self.lbl = lbl\n        self.lamda = lamda\n        self.classnum = lbl.shape[1]\n        self.datanum = lbl.shape[0]\n        self.x_shape = x_shape\n        self.dim = x_shape[1]\n        self.lamda = Param('lamda', np.diag(lamda))\n        self.link_parameter(self.lamda)\n\n    def get_class_label(self, y):\n        for idx, v in enumerate(y):\n            if v == 1:\n                return idx\n        return -1\n\n    # This function assigns each data point to its own class\n    # and returns the dictionary which contains the class name and parameters.\n    def compute_cls(self, x):\n        cls = {}\n        # Appending each data point to its proper class\n        for j in range(self.datanum):\n            class_label = self.get_class_label(self.lbl[j])\n            if class_label not in cls:\n                cls[class_label] = []\n            cls[class_label].append(x[j])\n        return cls\n\n    # This function computes mean of each class. The mean is calculated through each dimension\n    def compute_Mi(self, cls):\n        M_i = np.zeros((self.classnum, self.dim))\n        for i in cls:\n            # Mean of each class\n            class_i = cls[i]\n            M_i[i] = np.mean(class_i, axis=0)\n        return M_i\n\n    # Adding data points as tuple to the dictionary so that we can access indices\n    def compute_indices(self, x):\n        data_idx = {}\n        for j in range(self.datanum):\n            class_label = self.get_class_label(self.lbl[j])\n            if class_label not in data_idx:\n                data_idx[class_label] = []\n            t = (j, x[j])\n            data_idx[class_label].append(t)\n        return data_idx\n\n    # Adding indices to the list so we can access whole the indices\n    def compute_listIndices(self, data_idx):\n        lst_idx = []\n        lst_idx_all = []\n        for i in data_idx:\n            if len(lst_idx) == 0:\n                pass\n                #Do nothing, because it is the first time list is created so is empty\n            else:\n                lst_idx = []\n            # Here we put indices of each class in to the list called lst_idx_all\n            for m in range(len(data_idx[i])):\n                lst_idx.append(data_idx[i][m][0])\n            lst_idx_all.append(lst_idx)\n        return lst_idx_all\n\n    # This function calculates between classes variances\n    def compute_Sb(self, cls, M_i, M_0):\n        Sb = np.zeros((self.dim, self.dim))\n        for i in cls:\n            B = (M_i[i] - M_0).reshape(self.dim, 1)\n            B_trans = B.transpose()\n            Sb += (float(len(cls[i])) / self.datanum) * B.dot(B_trans)\n        return Sb\n\n    # This function calculates within classes variances\n    def compute_Sw(self, cls, M_i):\n        Sw = 
np.zeros((self.dim, self.dim))\n        for i in cls:\n            N_i = float(len(cls[i]))\n            W_WT = np.zeros((self.dim, self.dim))\n            for xk in cls[i]:\n                W = (xk - M_i[i])\n                W_WT += np.outer(W, W)\n            Sw += (N_i / self.datanum) * ((1. / N_i) * W_WT)\n        return Sw\n\n    # Calculating beta and Bi for Sb\n    def compute_sig_beta_Bi(self, data_idx, M_i, M_0, lst_idx_all):\n        # import pdb\n        # pdb.set_trace()\n        B_i = np.zeros((self.classnum, self.dim))\n        Sig_beta_B_i_all = np.zeros((self.datanum, self.dim))\n        for i in data_idx:\n            # pdb.set_trace()\n            # Calculating Bi\n            B_i[i] = (M_i[i] - M_0).reshape(1, self.dim)\n        for k in range(self.datanum):\n            for i in data_idx:\n                N_i = float(len(data_idx[i]))\n                if k in lst_idx_all[i]:\n                    beta = (float(1) / N_i) - (float(1) / self.datanum)\n                    Sig_beta_B_i_all[k] += float(N_i) / self.datanum * (beta * B_i[i])\n                else:\n                    beta = -(float(1) / self.datanum)\n                    Sig_beta_B_i_all[k] += float(N_i) / self.datanum * (beta * B_i[i])\n        Sig_beta_B_i_all = Sig_beta_B_i_all.transpose()\n        return Sig_beta_B_i_all\n\n\n    # Calculating W_j s separately so we can access all the W_j s anytime\n    def compute_wj(self, data_idx, M_i):\n        W_i = np.zeros((self.datanum, self.dim))\n        for i in data_idx:\n            N_i = float(len(data_idx[i]))\n            for tpl in data_idx[i]:\n                xj = tpl[1]\n                j = tpl[0]\n                W_i[j] = (xj - M_i[i])\n        return W_i\n\n    # Calculating alpha and Wj for Sw\n    def compute_sig_alpha_W(self, data_idx, lst_idx_all, W_i):\n        Sig_alpha_W_i = np.zeros((self.datanum, self.dim))\n        for i in data_idx:\n            N_i = float(len(data_idx[i]))\n            for tpl in data_idx[i]:\n                k = tpl[0]\n                for j in lst_idx_all[i]:\n                    if k == j:\n                        alpha = 1 - (float(1) / N_i)\n                        Sig_alpha_W_i[k] += (alpha * W_i[j])\n                    else:\n                        alpha = 0 - (float(1) / N_i)\n                        Sig_alpha_W_i[k] += (alpha * W_i[j])\n        Sig_alpha_W_i = (1. 
/ self.datanum) * np.transpose(Sig_alpha_W_i)\n        return Sig_alpha_W_i\n\n    # This function calculates log of our prior\n    def lnpdf(self, x):\n        x = x.reshape(self.x_shape)\n\n        #!!!!!!!!!!!!!!!!!!!!!!!!!!!\n        #self.lamda.values[:] = self.lamda.values/self.lamda.values.sum()\n\n        xprime = x.dot(np.diagflat(self.lamda))\n        x = xprime\n        # print x\n        cls = self.compute_cls(x)\n        M_0 = np.mean(x, axis=0)\n        M_i = self.compute_Mi(cls)\n        Sb = self.compute_Sb(cls, M_i, M_0)\n        Sw = self.compute_Sw(cls, M_i)\n        # Sb_inv_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))\n        #Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)\n        #Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.5))[0]\n        Sb_inv_N = pdinv(Sb + np.eye(Sb.shape[0])*0.9)[0]\n        return (-1 / self.sigma2) * np.trace(Sb_inv_N.dot(Sw))\n\n    # This function calculates derivative of the log of prior function\n    def lnpdf_grad(self, x):\n        x = x.reshape(self.x_shape)\n        xprime = x.dot(np.diagflat(self.lamda))\n        x = xprime\n        # print x\n        cls = self.compute_cls(x)\n        M_0 = np.mean(x, axis=0)\n        M_i = self.compute_Mi(cls)\n        Sb = self.compute_Sb(cls, M_i, M_0)\n        Sw = self.compute_Sw(cls, M_i)\n        data_idx = self.compute_indices(x)\n        lst_idx_all = self.compute_listIndices(data_idx)\n        Sig_beta_B_i_all = self.compute_sig_beta_Bi(data_idx, M_i, M_0, lst_idx_all)\n        W_i = self.compute_wj(data_idx, M_i)\n        Sig_alpha_W_i = self.compute_sig_alpha_W(data_idx, lst_idx_all, W_i)\n\n        # Calculating inverse of Sb and its transpose and minus\n        # Sb_inv_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))\n        #Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)\n        #Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.5))[0]\n        Sb_inv_N = pdinv(Sb + np.eye(Sb.shape[0])*0.9)[0]\n        Sb_inv_N_trans = np.transpose(Sb_inv_N)\n        Sb_inv_N_trans_minus = -1 * Sb_inv_N_trans\n        Sw_trans = np.transpose(Sw)\n\n        # Calculating DJ/DXk\n        DJ_Dxk = 2 * (\n            Sb_inv_N_trans_minus.dot(Sw_trans).dot(Sb_inv_N_trans).dot(Sig_beta_B_i_all) + Sb_inv_N_trans.dot(\n                Sig_alpha_W_i))\n        # Calculating derivative of the log of the prior\n        DPx_Dx = ((-1 / self.sigma2) * DJ_Dxk)\n\n        DPxprim_Dx = np.diagflat(self.lamda).dot(DPx_Dx)\n\n        # Because of the GPy we need to transpose our matrix so that it gets the same shape as out matrix (denominator layout!!!)\n        DPxprim_Dx = DPxprim_Dx.T\n\n        DPxprim_Dlamda = DPx_Dx.dot(x)\n\n        # Because of the GPy we need to transpose our matrix so that it gets the same shape as out matrix (denominator layout!!!)\n        DPxprim_Dlamda = DPxprim_Dlamda.T\n\n        self.lamda.gradient = np.diag(DPxprim_Dlamda)\n        # print DPxprim_Dx\n        return DPxprim_Dx\n\n\n    # def frb(self, x):\n    #     from functools import partial\n    #     from GPy.models import GradientChecker\n    #     f = partial(self.lnpdf)\n    #     df = partial(self.lnpdf_grad)\n    #     grad = GradientChecker(f, df, x, 'X')\n    #     grad.checkgrad(verbose=1)\n\n    def rvs(self, n):\n        return np.random.rand(n)  # A WRONG implementation\n\n    def __str__(self):\n        return 'DGPLVM_prior_Raq_Lamda'\n\n# ******************************************\n\nclass DGPLVM_T(Prior):\n    
\"\"\"\n    Implementation of the Discriminative Gaussian Process Latent Variable model paper, by Raquel.\n\n    :param sigma2: constant\n\n    .. Note:: DGPLVM for Classification paper implementation\n\n    \"\"\"\n    domain = _REAL\n    # _instances = []\n    # def __new__(cls, mu, sigma): # Singleton:\n    #     if cls._instances:\n    #         cls._instances[:] = [instance for instance in cls._instances if instance()]\n    #         for instance in cls._instances:\n    #             if instance().mu == mu and instance().sigma == sigma:\n    #                 return instance()\n    #     o = super(Prior, cls).__new__(cls, mu, sigma)\n    #     cls._instances.append(weakref.ref(o))\n    #     return cls._instances[-1]()\n\n    def __init__(self, sigma2, lbl, x_shape, vec):\n        self.sigma2 = sigma2\n        # self.x = x\n        self.lbl = lbl\n        self.classnum = lbl.shape[1]\n        self.datanum = lbl.shape[0]\n        self.x_shape = x_shape\n        self.dim = x_shape[1]\n        self.vec = vec\n\n\n    def get_class_label(self, y):\n        for idx, v in enumerate(y):\n            if v == 1:\n                return idx\n        return -1\n\n    # This function assigns each data point to its own class\n    # and returns the dictionary which contains the class name and parameters.\n    def compute_cls(self, x):\n        cls = {}\n        # Appending each data point to its proper class\n        for j in range(self.datanum):\n            class_label = self.get_class_label(self.lbl[j])\n            if class_label not in cls:\n                cls[class_label] = []\n            cls[class_label].append(x[j])\n        return cls\n\n    # This function computes mean of each class. The mean is calculated through each dimension\n    def compute_Mi(self, cls):\n        M_i = np.zeros((self.classnum, self.dim))\n        for i in cls:\n            # Mean of each class\n            # class_i = np.multiply(cls[i],vec)\n            class_i = cls[i]\n            M_i[i] = np.mean(class_i, axis=0)\n        return M_i\n\n    # Adding data points as tuple to the dictionary so that we can access indices\n    def compute_indices(self, x):\n        data_idx = {}\n        for j in range(self.datanum):\n            class_label = self.get_class_label(self.lbl[j])\n            if class_label not in data_idx:\n                data_idx[class_label] = []\n            t = (j, x[j])\n            data_idx[class_label].append(t)\n        return data_idx\n\n    # Adding indices to the list so we can access whole the indices\n    def compute_listIndices(self, data_idx):\n        lst_idx = []\n        lst_idx_all = []\n        for i in data_idx:\n            if len(lst_idx) == 0:\n                pass\n                #Do nothing, because it is the first time list is created so is empty\n            else:\n                lst_idx = []\n            # Here we put indices of each class in to the list called lst_idx_all\n            for m in range(len(data_idx[i])):\n                lst_idx.append(data_idx[i][m][0])\n            lst_idx_all.append(lst_idx)\n        return lst_idx_all\n\n    # This function calculates between classes variances\n    def compute_Sb(self, cls, M_i, M_0):\n        Sb = np.zeros((self.dim, self.dim))\n        for i in cls:\n            B = (M_i[i] - M_0).reshape(self.dim, 1)\n            B_trans = B.transpose()\n            Sb += (float(len(cls[i])) / self.datanum) * B.dot(B_trans)\n        return Sb\n\n    # This function calculates within classes variances\n    def compute_Sw(self, cls, 
M_i):\n        Sw = np.zeros((self.dim, self.dim))\n        for i in cls:\n            N_i = float(len(cls[i]))\n            W_WT = np.zeros((self.dim, self.dim))\n            for xk in cls[i]:\n                W = (xk - M_i[i])\n                W_WT += np.outer(W, W)\n            Sw += (N_i / self.datanum) * ((1. / N_i) * W_WT)\n        return Sw\n\n    # Calculating beta and Bi for Sb\n    def compute_sig_beta_Bi(self, data_idx, M_i, M_0, lst_idx_all):\n        # import pdb\n        # pdb.set_trace()\n        B_i = np.zeros((self.classnum, self.dim))\n        Sig_beta_B_i_all = np.zeros((self.datanum, self.dim))\n        for i in data_idx:\n            # pdb.set_trace()\n            # Calculating Bi\n            B_i[i] = (M_i[i] - M_0).reshape(1, self.dim)\n        for k in range(self.datanum):\n            for i in data_idx:\n                N_i = float(len(data_idx[i]))\n                if k in lst_idx_all[i]:\n                    beta = (float(1) / N_i) - (float(1) / self.datanum)\n                    Sig_beta_B_i_all[k] += float(N_i) / self.datanum * (beta * B_i[i])\n                else:\n                    beta = -(float(1) / self.datanum)\n                    Sig_beta_B_i_all[k] += float(N_i) / self.datanum * (beta * B_i[i])\n        Sig_beta_B_i_all = Sig_beta_B_i_all.transpose()\n        return Sig_beta_B_i_all\n\n\n    # Calculating W_j s separately so we can access all the W_j s anytime\n    def compute_wj(self, data_idx, M_i):\n        W_i = np.zeros((self.datanum, self.dim))\n        for i in data_idx:\n            N_i = float(len(data_idx[i]))\n            for tpl in data_idx[i]:\n                xj = tpl[1]\n                j = tpl[0]\n                W_i[j] = (xj - M_i[i])\n        return W_i\n\n    # Calculating alpha and Wj for Sw\n    def compute_sig_alpha_W(self, data_idx, lst_idx_all, W_i):\n        Sig_alpha_W_i = np.zeros((self.datanum, self.dim))\n        for i in data_idx:\n            N_i = float(len(data_idx[i]))\n            for tpl in data_idx[i]:\n                k = tpl[0]\n                for j in lst_idx_all[i]:\n                    if k == j:\n                        alpha = 1 - (float(1) / N_i)\n                        Sig_alpha_W_i[k] += (alpha * W_i[j])\n                    else:\n                        alpha = 0 - (float(1) / N_i)\n                        Sig_alpha_W_i[k] += (alpha * W_i[j])\n        Sig_alpha_W_i = (1. 
/ self.datanum) * np.transpose(Sig_alpha_W_i)\n        return Sig_alpha_W_i\n\n    # This function calculates the log of our prior\n    def lnpdf(self, x):\n        x = x.reshape(self.x_shape)\n        xprim = x.dot(self.vec)\n        x = xprim\n        # print x\n        cls = self.compute_cls(x)\n        M_0 = np.mean(x, axis=0)\n        M_i = self.compute_Mi(cls)\n        Sb = self.compute_Sb(cls, M_i, M_0)\n        Sw = self.compute_Sw(cls, M_i)\n        # Sb_inv_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))\n        #Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)\n        #print 'SB_inv: ', Sb_inv_N\n        #Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))[0]\n        Sb_inv_N = pdinv(Sb+np.eye(Sb.shape[0])*0.1)[0]\n        return (-1 / self.sigma2) * np.trace(Sb_inv_N.dot(Sw))\n\n    # This function calculates the derivative of the log prior\n    def lnpdf_grad(self, x):\n        x = x.reshape(self.x_shape)\n        xprim = x.dot(self.vec)\n        x = xprim\n        # print x\n        cls = self.compute_cls(x)\n        M_0 = np.mean(x, axis=0)\n        M_i = self.compute_Mi(cls)\n        Sb = self.compute_Sb(cls, M_i, M_0)\n        Sw = self.compute_Sw(cls, M_i)\n        data_idx = self.compute_indices(x)\n        lst_idx_all = self.compute_listIndices(data_idx)\n        Sig_beta_B_i_all = self.compute_sig_beta_Bi(data_idx, M_i, M_0, lst_idx_all)\n        W_i = self.compute_wj(data_idx, M_i)\n        Sig_alpha_W_i = self.compute_sig_alpha_W(data_idx, lst_idx_all, W_i)\n\n        # Calculating inverse of Sb and its transpose and minus\n        # Sb_inv_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))\n        #Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)\n        #print 'SB_inv: ',Sb_inv_N\n        #Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))[0]\n        Sb_inv_N = pdinv(Sb+np.eye(Sb.shape[0])*0.1)[0]\n        Sb_inv_N_trans = np.transpose(Sb_inv_N)\n        Sb_inv_N_trans_minus = -1 * Sb_inv_N_trans\n        Sw_trans = np.transpose(Sw)\n\n        # Calculating DJ/DXk\n        DJ_Dxk = 2 * (\n            Sb_inv_N_trans_minus.dot(Sw_trans).dot(Sb_inv_N_trans).dot(Sig_beta_B_i_all) + Sb_inv_N_trans.dot(\n                Sig_alpha_W_i))\n        # Calculating derivative of the log of the prior\n        DPx_Dx = ((-1 / self.sigma2) * DJ_Dxk)\n        return DPx_Dx.T\n\n    # def frb(self, x):\n    #     from functools import partial\n    #     from GPy.models import GradientChecker\n    #     f = partial(self.lnpdf)\n    #     df = partial(self.lnpdf_grad)\n    #     grad = GradientChecker(f, df, x, 'X')\n    #     grad.checkgrad(verbose=1)\n\n    def rvs(self, n):\n        return np.random.rand(n)  # A WRONG implementation\n\n    def __str__(self):\n        return 'DGPLVM_prior_Raq_TTT'\n\n\nclass HalfT(Prior):\n    \"\"\"\n    Implementation of the half-Student-t probability function, coupled with random variables.\n\n    :param A: scale parameter (used as a variance, sigma^2)\n    :param nu: degrees of freedom\n\n    \"\"\"\n    domain = _POSITIVE\n    _instances = []\n\n    def __new__(cls, A, nu):  # Singleton:\n        if cls._instances:\n            cls._instances[:] = [instance for instance in cls._instances if instance()]\n            for instance in cls._instances:\n                if instance().A == A and instance().nu == nu:\n                    return instance()\n        # object.__new__ takes no extra arguments in Python 3 (same pattern as StudentT below)\n        newfunc = super(Prior, cls).__new__\n        if newfunc is object.__new__:\n            o = newfunc(cls)\n        else:\n            o = newfunc(cls, A, nu)\n        cls._instances.append(weakref.ref(o))\n        return 
cls._instances[-1]()\n\n    def __init__(self, A, nu):\n        self.A = float(A)\n        self.nu = float(nu)\n        self.constant = gammaln(.5*(self.nu+1.)) - gammaln(.5*self.nu) - .5*np.log(np.pi*self.A*self.nu)\n\n    def __str__(self):\n        return \"hT({:.2g}, {:.2g})\".format(self.A, self.nu)\n\n    def lnpdf(self, theta):\n        # A is treated as a variance (sigma^2), consistent with ``constant`` and ``lnpdf_grad``\n        return (theta > 0) * (self.constant - .5*(self.nu + 1) * np.log(1. + (1./self.nu) * (theta**2/self.A)))\n\n    def lnpdf_grad(self, theta):\n        theta = theta if isinstance(theta, np.ndarray) else np.array([theta])\n        grad = np.zeros_like(theta)\n        above_zero = theta > 1e-6\n        v = self.nu\n        sigma2 = self.A\n        # elementwise denominator over theta\n        grad[above_zero] = -0.5*(v+1)*(2*theta[above_zero])/(v*sigma2 + theta[above_zero]**2)\n        return grad\n\n    def rvs(self, n):\n        from scipy.stats import t\n        # half-t samples: take absolute values rather than clamping negatives to zero\n        return np.abs(t.rvs(self.nu, loc=0, scale=np.sqrt(self.A), size=n))\n\n\nclass Exponential(Prior):\n    \"\"\"\n    Implementation of the Exponential probability function,\n    coupled with random variables.\n\n    :param l: rate parameter\n\n    \"\"\"\n    domain = _POSITIVE\n    _instances = []\n\n    def __new__(cls, l):  # Singleton:\n        if cls._instances:\n            cls._instances[:] = [instance for instance in cls._instances if instance()]\n            for instance in cls._instances:\n                if instance().l == l:\n                    return instance()\n        newfunc = super(Exponential, cls).__new__\n        if newfunc is object.__new__:\n            o = newfunc(cls)\n        else:\n            o = newfunc(cls, l)\n        cls._instances.append(weakref.ref(o))\n        return cls._instances[-1]()\n\n    def __init__(self, l):\n        self.l = l\n\n    def __str__(self):\n        return \"Exp({:.2g})\".format(self.l)\n\n    def summary(self):\n        ret = {\"E[x]\": 1. / self.l,\n               \"E[ln x]\": np.nan,\n               \"var[x]\": 1. / self.l**2,\n               \"Entropy\": 1. - np.log(self.l),\n               \"Mode\": 0.}\n        return ret\n\n    def lnpdf(self, x):\n        return np.log(self.l) - self.l * x\n\n    def lnpdf_grad(self, x):\n        return - self.l\n\n    def rvs(self, n):\n        # numpy parameterizes the exponential by scale = 1/rate, so invert l here\n        return np.random.exponential(scale=1./self.l, size=n)\n\n\nclass StudentT(Prior):\n    \"\"\"\n    Implementation of the Student-t probability function, coupled with random variables.\n\n    :param mu: mean\n    :param sigma: standard deviation\n    :param nu: degrees of freedom\n\n    .. 
Note:: Bishop 2006 notation is used throughout the code\n\n    \"\"\"\n    domain = _REAL\n    _instances = []\n\n    def __new__(cls, mu=0, sigma=1, nu=4):  # Singleton:\n        if cls._instances:\n            cls._instances[:] = [instance for instance in cls._instances if instance()]\n            for instance in cls._instances:\n                if instance().mu == mu and instance().sigma == sigma and instance().nu == nu:\n                    return instance()\n        newfunc = super(Prior, cls).__new__\n        if newfunc is object.__new__:\n            o = newfunc(cls)\n        else:\n            o = newfunc(cls, mu, sigma, nu)\n        cls._instances.append(weakref.ref(o))\n        return cls._instances[-1]()\n\n    def __init__(self, mu, sigma, nu):\n        self.mu = float(mu)\n        self.sigma = float(sigma)\n        self.sigma2 = np.square(self.sigma)\n        self.nu = float(nu)\n\n    def __str__(self):\n        return \"St({:.2g}, {:.2g}, {:.2g})\".format(self.mu, self.sigma, self.nu)\n\n    def lnpdf(self, x):\n        from scipy.stats import t\n        return t.logpdf(x, self.nu, self.mu, self.sigma)\n\n    def lnpdf_grad(self, x):\n        return -(self.nu + 1.) * (x - self.mu) / (self.nu * self.sigma2 + np.square(x - self.mu))\n\n    def rvs(self, n):\n        from scipy.stats import t\n        ret = t.rvs(self.nu, loc=self.mu, scale=self.sigma, size=n)\n        return ret\n
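\n# A minimal, hedged sanity check (not part of the original module): verify that\n# StudentT.lnpdf_grad matches a central finite-difference estimate of lnpdf.\n# Assumes numpy (np) and scipy are available, as elsewhere in this module.\nif __name__ == '__main__':\n    _prior = StudentT(0., 1., 4.)\n    _x = np.linspace(-3., 3., 7)\n    _eps = 1e-6\n    _numeric = (_prior.lnpdf(_x + _eps) - _prior.lnpdf(_x - _eps)) / (2. * _eps)\n    assert np.allclose(_prior.lnpdf_grad(_x), _numeric, atol=1e-4)\n"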
  },
  {
    "path": "transopt/utils/Read.py",
    "content": "import os\nimport pandas as pd\nimport requests\n\n\ndef read_file(file_path)->pd.DataFrame:\n    _, file_extension = os.path.splitext(file_path)\n\n    if file_extension:\n        # Determine and read based on file extension\n        if file_extension == '.json':\n            return pd.read_json(file_path)\n        elif file_extension == '.txt':\n            return pd.read_csv(file_path, sep='\\t')  # Adjust delimiter as needed\n        elif file_extension == '.csv':\n            df = pd.read_csv(file_path)\n            unnamed_columns = [col for col in df.columns if \"--unlimited\" in col]\n            df.drop(unnamed_columns, axis=1, inplace=True)\n            return df\n        elif file_extension in ['.xls', '.xlsx']:\n            return pd.read_excel(file_path)\n        else:\n            raise ValueError(f\"Unsupported file type: {file_extension}\")\n    else:\n        # No file extension, attempt different methods to read\n        try:\n            return pd.read_csv(file_path)\n        except:\n            pass  # Continue trying if CSV fails\n        try:\n            return pd.read_excel(file_path)\n        except:\n            pass  # Continue trying if Excel fails\n        try:\n            return pd.read_json(file_path)\n        except:\n            pass  # Continue trying if JSON fails\n\n        try:\n            return pd.read_csv(file_path, sep='\\t')  # Assuming it might be a TXT file\n        except:\n            pass  # Continue trying if TXT fails\n\n        raise ValueError(\"File could not be read with any method. Ensure the file format is correct.\")\n\n\ndef read_url(url):\n    # 定义UCI和OpenML的URL模式\n    uci_pattern = \"archive.ics.uci.edu\"\n    openml_pattern = \"openml.org\"\n\n    # 初始化数据集来源\n    data_source = None\n\n    # 尝试从URL下载数据\n    try:\n        response = requests.get(url)\n        data = response.text\n\n        # 检测URL是否指向UCI或OpenML\n        if uci_pattern in url:\n            data_source = \"UCI\"\n        elif openml_pattern in url:\n            data_source = \"OpenML\"\n\n        # 返回数据和数据来源信息\n        return data, data_source\n\n    except requests.RequestException as e:\n        return None, data_source\n"
  },
  {
    "path": "transopt/utils/Visualization.py",
    "content": "import os\nimport warnings\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\nfrom itertools import product\n\nos.environ[\"KMP_DUPLICATE_LIB_OK\"] = \"TRUE.\"\n\nfrom benchmark.synthetic import synthetic_problems\nfrom transopt.utils.serialization import ndarray_to_vectors, vectors_to_ndarray\nfrom transopt.utils.Normalization import normalize\n\n\ndef visual_contour(\n    optimizer,\n    testsuites,\n    train_x,\n    train_y,\n    Ac_candi,\n    test_size=101,\n    ac_model=None,\n    dtype=np.float64,\n):\n    # Initialize plots\n    f, ax = plt.subplots(2, 2, figsize=(16, 16))\n\n    search_space_info = optimizer.get_spaceinfo(\"search\")\n\n    var_name = [var[\"name\"] for var in search_space_info]\n    search_bound = [(var[\"domain\"][0], var[\"domain\"][1]) for var in search_space_info]\n\n    # optimizers = problem.optimizers\n    xgrid_0, xgrid_1 = np.meshgrid(\n        np.linspace(search_bound[0][0], search_bound[0][1], test_size, dtype=dtype),\n        np.linspace(search_bound[1][0], search_bound[1][1], test_size, dtype=dtype),\n    )\n    test_x = np.concatenate(\n        (\n            xgrid_0.reshape((xgrid_0.shape[0] * xgrid_0.shape[1], 1)),\n            xgrid_1.reshape((xgrid_0.shape[0] * xgrid_0.shape[1], 1)),\n        ),\n        axis=1,\n    )\n\n    test_vec = ndarray_to_vectors(var_name, test_x)\n\n    observed_pred_y, observed_corv = optimizer.predict(test_x)\n    observed_pred_y = observed_pred_y.reshape(xgrid_0.shape)\n    observed_corv = observed_corv.reshape(xgrid_0.shape)\n\n    # Calculate the true value\n    test_x_design = [optimizer._to_designspace(v) for v in test_vec]\n    testsuites.lock()\n    test_y = testsuites.f(test_x_design)\n    test_y = [y[\"function_value\"] for y in test_y]\n\n    mean = np.mean(train_y)\n    std = np.std(train_y)\n    test_y = normalize(test_y, mean, std)\n    test_y = np.array(test_y).reshape(xgrid_0.shape)\n\n    # Calculate EI for the problem\n    if ac_model is not None:\n        test_ei = ac_model._compute_acq(test_x)\n        test_ei = test_ei.reshape(xgrid_0.shape)\n\n    candidate = optimizer._to_searchspace(Ac_candi[0])\n    candidate = [v for x, v in candidate.items()]\n\n    def ax_plot(title, ax, train_x, plot_y, test_size, cmap):\n        ax.plot(train_x[:, 0], train_x[:, 1], \"k*\")\n        # Predictive mean as blue line\n        with warnings.catch_warnings():\n            warnings.simplefilter(\"ignore\")\n            h1 = ax.contourf(\n                xgrid_0,\n                xgrid_1,\n                plot_y,\n                np.arange(-3, 3.5, 0.5),\n                cmap=cmap,\n            )\n            c1 = plt.colorbar(h1, ax=ax)\n            # ax.clabel(C, inline=True)\n            min_loc_1 = (\n                int(np.argmin(plot_y) / test_size),\n                np.remainder(np.argmin(plot_y), test_size),\n            )\n            ax.plot(xgrid_0[min_loc_1], xgrid_1[min_loc_1], \"b*\")\n\n        ax.set_xlim([-1, 1])\n        ax.set_title(title)\n\n    # PLot true contour in the left plot\n    ax_plot(\n        \"iter_\" + str(train_x.shape[0]),\n        ax[0][0],\n        train_x,\n        test_y.reshape(xgrid_0.shape),\n        test_size,\n        cm.Reds,\n    )\n\n    ax_plot(\n        \"Prediction\",\n        ax[0][1],\n        train_x,\n        observed_pred_y.reshape(xgrid_0.shape),\n        test_size,\n        cm.Blues,\n    )\n\n    def ax_plot_ei(title, ax, train_x, plot_ei, candidate, cmap):\n        # Predictive mean as blue line\n        h1 
= ax.contourf(xgrid_0, xgrid_1, plot_ei, np.arange(-3, 3.5, 0.5), cmap=cmap)\n        c1 = plt.colorbar(h1, ax=ax)\n        max_loc = (\n            int(np.argmax(plot_ei) / test_size),\n            np.remainder(np.argmax(plot_ei), test_size),\n        )\n        ax.plot(xgrid_0[max_loc], xgrid_1[max_loc], \"g*\")\n        ax.plot(candidate[0], candidate[1], color=\"orange\", marker=\"*\", linewidth=0)\n        ax.set_title(title)\n\n    if ac_model is not None:\n        ax_plot_ei(\n            \"Acquisition Function\", ax[1][1], train_x, test_ei, candidate, cm.Greens\n        )\n\n    # Plot the covariance contour in the last row\n    ax_plot(\n        \"Prediction covariance\", ax[1][0], train_x, observed_corv, test_size, cm.Blues\n    )\n\n    plt.grid()\n\n    Exper_folder = optimizer.exp_path\n    if not os.path.exists(\n        \"{}/verbose/contour/{}/{}\".format(\n            Exper_folder, optimizer.optimizer_name, f\"{testsuites.get_curname()}\"\n        )\n    ):\n        os.makedirs(\n            \"{}/verbose/contour/{}/{}\".format(\n                Exper_folder, optimizer.optimizer_name, f\"{testsuites.get_curname()}\"\n            )\n        )\n\n    plt.savefig(\n        \"{}/verbose/contour/{}/{}/{}.png\".format(\n            Exper_folder,\n            optimizer.optimizer_name,\n            f\"{testsuites.get_curname()}\",\n            f\"iter_{testsuites.get_query_num()}\",\n        ),\n        format=\"png\",\n    )\n    plt.close()\n    testsuites.unlock()\n\n\ndef visual_oned(\n    optimizer, testsuites, train_x, train_y, Ac_candi, ac_model=None, dtype=np.float64\n):\n    # Initialize plots\n    f, ax = plt.subplots(1, 1, figsize=(8, 8))\n\n    search_space_info = optimizer.get_spaceinfo(\"search\")\n\n    var_name = [var[\"name\"] for var in search_space_info]\n    search_bound = [\n        search_space_info[0][\"domain\"][0],\n        search_space_info[0][\"domain\"][1],\n    ]\n    test_x = np.arange(search_bound[0], search_bound[1] + 0.005, 0.005, dtype=dtype)\n\n    observed_pred_y, observed_corv = optimizer.predict(test_x[:, np.newaxis])\n    test_vec = ndarray_to_vectors(var_name, test_x[:, np.newaxis])\n    # Calculate the true value\n    test_x_design = [optimizer._to_designspace(v) for v in test_vec]\n    testsuites.lock()\n    test_y = testsuites.f(test_x_design)\n    test_y = np.array([y[\"function_value\"] for y in test_y])\n\n    y_mean = np.mean(train_y)\n    y_std = np.std(train_y)\n    test_y = normalize(test_y, y_mean, y_std)\n    train_y_temp = normalize(train_y, y_mean, y_std)\n\n    # Calculate EI for the problem\n    if ac_model is not None:\n        test_ei = ac_model._compute_acq(test_x[:, np.newaxis])\n\n    pre_mean = observed_pred_y\n    pre_best_y = np.min(pre_mean)\n    pre_best_x = test_x[np.argmin(pre_mean)]\n    pre_up = observed_pred_y + observed_corv\n    pre_low = observed_pred_y - observed_corv\n\n    ax.plot(test_x, test_y, \"r-\", linewidth=1, alpha=1)\n    ax.plot(test_x, pre_mean[:, 0], \"b-\", linewidth=1, alpha=1)\n    if ac_model is not None:\n        ax.plot(test_x, test_ei[:, 0], \"g-\", linewidth=1, alpha=1)\n\n    candidate = optimizer._to_searchspace(Ac_candi[0])\n    ax.plot(train_x[:, 0], train_y_temp[:, 0], marker=\"*\", color=\"black\", linewidth=0)\n    ax.plot(candidate[var_name[0]], 0, marker=\"*\", color=\"orange\", linewidth=0)\n    ax.plot(pre_best_x, pre_best_y, marker=\"*\", color=\"blue\", linewidth=0)\n    ax.fill_between(test_x, pre_up[:, 0], pre_low[:, 0], alpha=0.2, facecolor=\"blue\")\n\n    Exper_folder = 
optimizer.exp_path\n    if not os.path.exists(\n        \"{}/verbose/oneD/{}/{}\".format(\n            Exper_folder, optimizer.optimizer_name, f\"{testsuites.get_curname()}\"\n        )\n    ):\n        os.makedirs(\n            \"{}/verbose/oneD/{}/{}\".format(\n                Exper_folder, optimizer.optimizer_name, f\"{testsuites.get_curname()}\"\n            )\n        )\n\n    ax.legend()\n    plt.grid()\n\n    plt.savefig(\n        \"{}/verbose/oneD/{}/{}/{}.png\".format(\n            Exper_folder,\n            optimizer.optimizer_name,\n            f\"{testsuites.get_curname()}\",\n            f\"iter_{testsuites.get_query_num()}\",\n        ),\n        format=\"png\",\n    )\n\n    os.makedirs(\n        \"{}/verbose/oneD/{}/{}/\".format(\n            Exper_folder, optimizer.optimizer_name, f\"{testsuites.get_curname()}\"\n        ),\n        exist_ok=True,\n    )\n    np.savetxt(\n        \"{}/verbose/oneD/{}/{}/{}_true.txt\".format(\n            Exper_folder,\n            optimizer.optimizer_name,\n            f\"{testsuites.get_curname()}\",\n            f\"{testsuites.get_query_num()}\",\n        ),\n        np.concatenate((test_x[:, np.newaxis], test_y[:, np.newaxis]), axis=1),\n    )\n    np.savetxt(\n        \"{}/verbose/oneD/{}/{}/{}_pred_y.txt\".format(\n            Exper_folder,\n            optimizer.optimizer_name,\n            f\"{testsuites.get_curname()}\",\n            f\"{testsuites.get_query_num()}\",\n        ),\n        np.concatenate((test_x[:, np.newaxis], observed_pred_y), axis=1),\n    )\n    np.savetxt(\n        \"{}/verbose/oneD/{}/{}/{}_cov_lower.txt\".format(\n            Exper_folder,\n            optimizer.optimizer_name,\n            f\"{testsuites.get_curname()}\",\n            f\"{testsuites.get_query_num()}\",\n        ),\n        np.concatenate(\n            (test_x[:, np.newaxis], observed_pred_y - observed_corv), axis=1\n        ),\n    )\n    np.savetxt(\n        \"{}/verbose/oneD/{}/{}/{}_cov_higher.txt\".format(\n            Exper_folder,\n            optimizer.optimizer_name,\n            f\"{testsuites.get_curname()}\",\n            f\"{testsuites.get_query_num()}\",\n        ),\n        np.concatenate(\n            (test_x[:, np.newaxis], observed_pred_y + observed_corv), axis=1\n        ),\n    )\n    if ac_model is not None:\n        np.savetxt(\n            \"{}/verbose/oneD/{}/{}/{}_ei.txt\".format(\n                Exper_folder,\n                optimizer.optimizer_name,\n                f\"{testsuites.get_curname()}\",\n                f\"{testsuites.get_query_num()}\",\n            ),\n            np.concatenate((test_x[:, np.newaxis], test_ei), axis=1),\n        )\n\n    np.savetxt(\n        \"{}/verbose/oneD/{}/{}/{}_train.txt\".format(\n            Exper_folder,\n            optimizer.optimizer_name,\n            f\"{testsuites.get_curname()}\",\n            f\"{testsuites.get_query_num()}\",\n        ),\n        np.concatenate((train_x, train_y_temp), axis=1),\n    )\n\n    plt.close()\n    testsuites.unlock()\n\n\ndef visual_pf(\n    optimizer, testsuites, train_x, train_y, Ac_candi, ac_model=None, dtype=np.float64\n):\n    f, ax = plt.subplots(1, 1, figsize=(8, 8))\n\n    search_space_info = optimizer.get_spaceinfo(\"search\")\n\n    final_pfront = pareto.find_pareto_only_y(obs_points_dic[\"ParEGO\"])\n    pfront_sorted = final_pfront[final_pfront[:, 0].argsort(), :]\n    plt.scatter(pfront_sorted[:, 0], pfront_sorted[:, 1], c=\"r\", label=\"ParEGO\")\n    plt.vlines(pfront_sorted[0, 0], ymin=pfront_sorted[0, 1], 
ymax=w_ref[1], colors=\"r\")\n    for i in range(pfront_sorted.shape[0] - 1):\n        plt.hlines(\n            y=pfront_sorted[i, 1],\n            xmin=pfront_sorted[i, 0],\n            xmax=pfront_sorted[i + 1, 0],\n            colors=\"r\",\n        )\n        plt.vlines(\n            x=pfront_sorted[i + 1, 0],\n            ymin=pfront_sorted[i + 1, 1],\n            ymax=pfront_sorted[i, 1],\n            colors=\"r\",\n        )\n    plt.hlines(\n        y=pfront_sorted[-1, 1], xmin=pfront_sorted[-1, 0], xmax=w_ref[0], colors=\"r\"\n    )\n\n    var_name = [var[\"name\"] for var in search_space_info]\n    search_bound = [\n        search_space_info[0][\"domain\"][0],\n        search_space_info[0][\"domain\"][1],\n    ]\n    test_x = np.arange(search_bound[0], search_bound[1] + 0.005, 0.005, dtype=dtype)\n\n    observed_pred_y, observed_corv = optimizer.predict(test_x[:, np.newaxis])\n    test_vec = ndarray_to_vectors(var_name, test_x[:, np.newaxis])\n    # Calculate the true value\n    test_x_design = [optimizer._to_designspace(v) for v in test_vec]\n    testsuites.lock()\n    test_y = testsuites.f(test_x_design)\n    test_y = np.array([y[\"function_value\"] for y in test_y])\n\n    y_mean = np.mean(train_y)\n    y_std = np.std(train_y)\n    test_y = normalize(test_y, y_mean, y_std)\n    train_y_temp = normalize(train_y, y_mean, y_std)\n"
  },
  {
    "path": "transopt/utils/__init__.py",
    "content": ""
  },
  {
    "path": "transopt/utils/check.py",
    "content": "import os\nimport re\nimport requests\nimport ipaddress\nfrom urllib.parse import urlparse\n\n\ndef  check_dir(self):\n    # Validate path\n    if self.path and not (os.path.exists(self.path) and os.path.isfile(self.path)):\n        raise ValueError(\"Provided path is not a valid file\")\n\n\ndef check_url(url):\n    try:\n        result = urlparse(url)\n        return all([result.scheme, result.netloc])\n    except:\n        return False\n\n\ndef check_ip_address(ip_address):\n    try:\n        ipaddress.ip_address(ip_address)\n        return True\n    except ValueError:\n        return False"
  },
  {
    "path": "transopt/utils/encoding.py",
    "content": "import pandas as pds\n\ndef target_encoding(df:pds.DataFrame, column_name, target_name):\n    \"\"\"\n    计算给定列的目标编码。\n\n    参数:\n    dataframe (pandas.DataFrame): 包含特征和目标列的DataFrame。\n    column_name (str): 需要进行目标编码的列名。\n    target_name (str): 目标变量的列名。\n\n    返回:\n    dict: 包含每个唯一值及其目标编码的字典。\n    \"\"\"\n    # 计算每个唯一值的目标均值\n    target_mean = df.groupby(column_name)[target_name].mean()\n    target_rank = target_mean.rank(method='average')\n\n    df[f'mean_encoding'] = df.groupby(column_name)[target_name].transform('mean')\n    df[f'rank_encoding'] = df[column_name].map(target_rank)\n    # target_rank = target_mean.rank\n    print(df[[column_name, target_name, f'mean_encoding', f'rank_encoding']].head(10))\n\n    encodings = {value: key for key, value in target_rank.to_dict().items()}\n\n    # 返回结果字典\n    return encodings\n\ndef multitarget_encoding(df:pds.DataFrame, column_name, target_names):\n    encodings = {}\n    for target in target_names:\n        # 计算每个唯一值的目标均值\n        target_mean = df.groupby(column_name)[target].mean()\n        encodings[target] = target_mean.to_dict()\n    return encodings"
  },
  {
    "path": "transopt/utils/hypervolume.py",
    "content": "\nimport numpy as np\nimport itertools as it\n\ndef find_pareto(X, y):\n    \"\"\"\n    find pareto set in X and pareto frontier in y\n\n    Paremeters\n    ----------\n    X : numpy.array\n        input data\n    y : numpy.array\n        output data\n\n    Return\n    ------\n    pareto_front : numpy.array\n        pareto frontier in y\n    pareto_set : numpy.array\n        pareto set in X\n    \"\"\"\n    y_copy = np.copy(y)\n    pareto_front = np.zeros((0 ,y.shape[1]))\n    pareto_set = np.zeros((0 ,X.shape[1]))\n    i = 0\n    j = 0\n    while i < y_copy.shape[0]:\n        y_outi = np.delete(y_copy, i, axis  =0)\n        # paretoだったら全部false\n        flag = np.all(y_outi <= y_copy[i ,:] ,axis = 1)\n        if not np.any(flag):\n            pareto_front = np.append(pareto_front, [y_copy[i ,:]] ,axis = 0)\n            pareto_set = np.append(pareto_set, [X[j ,:]] ,axis = 0)\n            i += 1\n        else :\n            y_copy = np.delete(y_copy, i, axis= 0)\n        j += 1\n    return pareto_front, pareto_set\n\ndef find_pareto_only_y(y):\n    \"\"\"\n    obtain only pareto frontier in y\n\n    Parameters\n    ----------\n    y : numpy.array\n        output data\n\n    Returns\n    -------\n    pareto_front : numpy.array\n        pareto frontier in y\n    \"\"\"\n    y_copy = np.copy(y)\n    pareto_front = np.zeros((0 ,y.shape[1]))\n    i = 0\n\n    while i < y_copy.shape[0]:\n        y_outi = np.delete(y_copy, i, axis  =0)\n        # paretoだったら全部false\n        flag = np.all(y_outi <= y_copy[i ,:] ,axis = 1)\n        if not np.any(flag):\n            pareto_front = np.append(pareto_front, [y_copy[i ,:]] ,axis = 0)\n            i += 1\n        else :\n            y_copy = np.delete(y_copy, i, axis= 0)\n    return pareto_front\n\n\ndef create_cells(pf, ref, ref_inv=None):\n    '''\n       从N个帕累托前沿创建被帕累托前沿支配的区域的独立单元格数组（最小化目标）。\n\n       参数\n       ----\n       pf : numpy array\n           帕累托前沿（N \\times L）\n       ref : numpy array\n           界定目标上界的参考点（L）\n       ref_inv : numpy array\n           界定目标下界的参考点（L）（为方便计算）\n\n       返回\n       ----\n       lower : numpy array\n           帕累托前沿截断区域中M个单元格的下界（M \\times L）\n       upper : numpy array\n           帕累托前沿截断区域中M个单元格的上界（M \\times L）\n       '''\n    N, L = np.shape(pf)\n\n    if ref_inv is None:\n        ref_inv = np.min(pf, axis=0)\n\n    if N == 1:\n        # 1つの場合そのまま返してよし\n        return np.atleast_2d(pf), np.atleast_2d(ref)\n    else:\n        # refと作る超体積が最も大きいものをpivotとする\n        hv = np.prod(pf - ref, axis=1)\n        pivot_index = np.argmax(hv)\n        pivot = pf[pivot_index]\n        # print('pivot :', pivot)\n\n        # pivotはそのままcellになる\n        lower = np.atleast_2d(pivot)\n        upper = np.atleast_2d(ref)\n\n        # 2^Lの全組み合わせに対して再帰を回す\n        for i in it.product(range(2), repeat=L):\n            # 全て1のところにはパレートフロンティアはもう無い\n            # 全て0のところはシンプルなセルになるので上で既に追加済\n            iter_index = np.array(list(i)) == 0\n            if (np.sum(iter_index) == 0) or (np.sum(iter_index) == L):\n                continue\n\n            # 新しい基準点(pivot座標からiの1が立っているところだけref座標に変換)\n            new_ref = pivot.copy()\n            new_ref[iter_index] = ref[iter_index]\n\n            # 新しいlower側の基準点(計算の都合上) (下側基準点座標からiの1が立っているところだけpivot座標に変換)\n            new_ref_inv = ref_inv.copy()\n            new_ref_inv[iter_index] = pivot[iter_index]\n\n            # new_refより全次元で大きいPareto解は残しておく必要あり\n            new_pf = pf[(pf < new_ref).all(axis=1), :]\n            # new_ref_invに支配されていない点はnew_refとnew_ref_invの作る超直方体に射影する\n      
      new_pf[new_pf < new_ref_inv] = np.tile(new_ref_inv, (new_pf.shape[0], 1))[new_pf < new_ref_inv]\n\n            # 再帰\n            if np.size(new_pf) > 0:\n                child_lower, child_upper = create_cells(new_pf, new_ref, new_ref_inv)\n\n                lower = np.r_[lower, np.atleast_2d(child_lower)]\n                upper = np.r_[upper, np.atleast_2d(child_upper)]\n\n    return lower, upper\n\n\n\ndef find_pareto_from_posterior(X, mean, y):\n    \"\"\"\n    find pareto frontier in predict mean of GPR and pareto set in X\n\n    Parameters\n    ----------\n    X : numpy.array\n        input data\n    mean : numpy.array\n        predict mean of GPR\n    y : numpy.array\n        output data\n\n    Returns\n    -------\n    pareto_front : numpy.array\n        pareto frontier in y defined by predict mean\n    pareto_set : numpy.array\n        pareto set in X\n    \"\"\"\n    mean_copy = np.copy(mean)\n    pareto_front = np.zeros((0 ,mean.shape[1]))\n    pareto_set = np.zeros((0 ,X.shape[1]))\n    i = 0\n    j = 0\n    while i < mean_copy.shape[0]:\n        mean_outi = np.delete(mean_copy, i, axis  =0)\n        # paretoだったら全部false\n        flag = np.all(mean_outi <= mean_copy[i ,:] ,axis = 1)\n        if not np.any(flag):\n            pareto_front = np.append(pareto_front, [y[j ,:]] ,axis = 0)\n            pareto_set = np.append(pareto_set, [X[j ,:]] ,axis = 0)\n            i += 1\n        else :\n            mean_copy = np.delete(mean_copy, i, axis= 0)\n        j += 1\n    return pareto_front, pareto_set\n\n\n\n\ndef calc_hypervolume(y, w_ref):\n    \"\"\"\n    calculate pareto hypervolume\n\n    Parameters\n    ----------\n    y : numpy.array\n        output data\n    w_ref : numpy.array\n        reference point for calculating hypervolume\n\n    Returns\n    -------\n    hypervolume : float\n        pareto hypervolume\n    \"\"\"\n    hypervolume = 0.0e0\n    pareto_front = find_pareto_only_y(y)\n    v, w = create_cells(pareto_front, w_ref)\n\n    if v.ndim == 1:\n        hypervolume = np.prod(w - v)\n    else:\n        hypervolume = np.sum(np.prod(w - v, axis=1))\n    return hypervolume\n\n"
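\n\n# A small, hedged usage sketch (not part of the original module): the hypervolume\n# dominated by a tiny two-objective front, w.r.t. an illustrative reference point.\n# By inclusion-exclusion the expected value is 11.0.\nif __name__ == '__main__':\n    _front = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])\n    _ref = np.array([5.0, 5.0])\n    print(calc_hypervolume(_front, _ref))  # 11.0\n"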
  },
  {
    "path": "transopt/utils/log.py",
    "content": "import logging\n\nfrom rich.logging import RichHandler\n\nloggers = {}\n\nLOGGER_NAME = \"Transopt\"\n\n\ndef get_logger(logger_name: str) -> logging.Logger:\n    # https://rich.readthedocs.io/en/latest/reference/logging.html#rich.logging.RichHandler\n    # https://rich.readthedocs.io/en/latest/logging.html#handle-exceptions\n    if logger_name in loggers:\n        return loggers[logger_name]\n    \n    _logger = logging.getLogger(logger_name) \n    rich_handler = RichHandler(\n        show_time=False,\n        rich_tracebacks=False,\n        show_path=True,\n        tracebacks_show_locals=False,\n    )\n    rich_handler.setFormatter(\n        logging.Formatter(\n            fmt=\"%(message)s\",\n            datefmt=\"[%X]\",\n        )\n    )\n\n    file_handler = logging.FileHandler('application.log') \n    file_handler.setFormatter(\n        logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')\n    )\n    \n    _logger.handlers.clear()\n    _logger.addHandler(rich_handler)\n    # _logger.addHandler(file_handler)\n    _logger.setLevel(logging.INFO)\n    _logger.propagate = False\n    \n    loggers[logger_name] = _logger\n    return _logger\n\n# logger = logging.getLogger(LOGGER_NAME)\n# logger.setLevel(logging.DEBUG)\n\nlogger = get_logger(LOGGER_NAME)\n"
  },
  {
    "path": "transopt/utils/openml_data_manager.py",
    "content": "\"\"\"\nThis file includes code adapted from HPOBench (https://github.com/automl/HPOBench),\nwhich is licensed under the Apache License 2.0. A copy of the license can be\nfound at http://www.apache.org/licenses/LICENSE-2.0.\n\"\"\"\n\n\n\"\"\" OpenMLDataManager organizing the data for the benchmarks with data from\nOpenML-tasks.\n\nDataManager organizing the download of the data.\nThe load function of a DataManger downloads the data given an unique OpenML\nidentifier. It splits the data in train, test and optional validation splits.\nIt can be distinguished between holdout and cross-validation data sets.\n\nFor Non-OpenML data sets please use the hpobench.util.data_manager.\n\"\"\"\n\nimport os\nimport abc\nimport logging\nimport tarfile\nimport requests\nimport openml\nimport numpy as np\nfrom pathlib import Path\nfrom typing import Tuple, List, Union\nfrom zipfile import ZipFile\nfrom oslo_concurrency import lockutils\nfrom sklearn.model_selection import train_test_split\n\nfrom transopt.utils.rng_helper import get_rng\n\n\n# TODO: 考虑使用 config 模块管理\ndef _check_dir(path: Path):\n    \"\"\" Check whether dir exists and if not create it\"\"\"\n    Path(path).mkdir(exist_ok=True, parents=True)\n\ncache_dir = os.environ.get('OPENML_CACHE_HOME', '~/.cache/transopt')\ndata_dir = os.environ.get('OPENML_DATA_HOME', '~/.local/share/transopt')\ncache_dir = Path(cache_dir).expanduser().absolute()\ndata_dir = Path(data_dir).expanduser().absolute()\n_check_dir(cache_dir)\n_check_dir(data_dir)\n\n\ndef get_openml100_taskids():\n    \"\"\"\n    Return task ids for the OpenML100 data ets\n    See also here: https://www.openml.org/s/14\n    Reference: https://arxiv.org/abs/1708.03731\n    \"\"\"\n    return [\n        258, 259, 261, 262, 266, 267, 271, 273, 275, 279, 283, 288, 2120,\n        2121, 2125, 336, 75093, 75092, 75095, 75097, 75099, 75103, 75107,\n        75106, 75109, 75108, 75112, 75129, 75128, 75135, 146574, 146575,\n        146572, 146573, 146578, 146579, 146576, 146577, 75154, 146582,\n        146583, 75156, 146580, 75159, 146581, 146586, 146587, 146584,\n        146585, 146590, 146591, 146588, 146589, 75169, 146594, 146595,\n        146592, 146593, 146598, 146599, 146596, 146597, 146602, 146603,\n        146600, 146601, 75181, 146604, 146605, 75215, 75217, 75219, 75221,\n        75225, 75227, 75231, 75230, 75232, 75235, 3043, 75236, 75239, 3047,\n        232, 233, 236, 3053, 3054, 3055, 241, 242, 244, 245, 246, 248, 250,\n        251, 252, 253, 254,\n    ]\n\n\ndef get_openmlcc18_taskids():\n    \"\"\"\n    Return task ids for the OpenML-CC18 data sets\n    See also here: https://www.openml.org/s/99\n    TODO: ADD reference\n    \"\"\"\n    return [167149, 167150, 167151, 167152, 167153, 167154, 167155, 167156, 167157,\n            167158, 167159, 167160, 167161, 167162, 167163, 167165, 167166, 167167,\n            167168, 167169, 167170, 167171, 167164, 167173, 167172, 167174, 167175,\n            167176, 167177, 167178, 167179, 167180, 167181, 167182, 126025, 167195,\n            167194, 167190, 167191, 167192, 167193, 167187, 167188, 126026, 167189,\n            167185, 167186, 167183, 167184, 167196, 167198, 126029, 167197, 126030,\n            167199, 126031, 167201, 167205, 189904, 167106, 167105, 189905, 189906,\n            189907, 189908, 189909, 167083, 167203, 167204, 189910, 167202, 167097,\n            ]\n\n\ndef _load_data(task_id: int):\n    \"\"\" Helper-function to load the data from the OpenML website. 
\"\"\"\n    task = openml.tasks.get_task(task_id)\n\n    try:\n        # This should throw an ValueError!\n        task.get_train_test_split_indices(fold=0, repeat=1)\n        raise AssertionError(f'Task {task_id} has more than one repeat. This '\n                             f'benchmark can only work with a single repeat.')\n    except ValueError:\n        pass\n\n    try:\n        # This should throw an ValueError!\n        task.get_train_test_split_indices(fold=1, repeat=0)\n        raise AssertionError(f'Task {task_id} has more than one fold. This '\n                             f'benchmark can only work with a single fold.')\n    except ValueError:\n        pass\n\n    train_indices, test_indices = task.get_train_test_split_indices()\n\n    X, y = task.get_X_and_y()\n\n    X_train = X[train_indices]\n    y_train = y[train_indices]\n    X_test = X[test_indices]\n    y_test = y[test_indices]\n\n    # TODO replace by more efficient function which only reads in the data\n    # saved in the arff file describing the attributes/features\n    dataset = task.get_dataset()\n    _, _, categorical_indicator, _ = dataset.get_data(target=task.target_name)\n    variable_types = ['categorical' if ci else 'numerical' for ci in categorical_indicator]\n\n    return X_train, y_train, X_test, y_test, variable_types, dataset.name\n\nclass DataManager(abc.ABC, metaclass=abc.ABCMeta):\n    \"\"\" Base Class for loading and managing the data.\n\n    Attributes\n    ----------\n    logger : logging.Logger\n\n    \"\"\"\n\n    def __init__(self):\n        self.logger = logging.getLogger(\"DataManager\")\n\n    @abc.abstractmethod\n    def load(self):\n        \"\"\" Loads data from data directory as defined in\n        config_file.data_directory\n        \"\"\"\n        raise NotImplementedError()\n\n    def create_save_directory(self, save_dir: Path):\n        \"\"\" Helper function. Check if data directory exists. If not, create it.\n\n        Parameters\n        ----------\n        save_dir : Path\n            Path to the directory. where the data should be stored\n        \"\"\"\n        if not save_dir.is_dir():\n            self.logger.debug(f'Create directory {save_dir}')\n            save_dir.mkdir(parents=True, exist_ok=True)\n\n    @lockutils.synchronized('not_thread_process_safe', external=True,\n                            lock_path=f'{cache_dir}/lock_download_file', delay=0.5)\n    def _download_file_with_progressbar(self, data_url: str, data_file: Path):\n        data_file = Path(data_file)\n\n        if data_file.exists():\n            self.logger.info('Data File already exists. 
Skip downloading.')\n            return\n\n        self.logger.info(f\"Download the file from {data_url} to {data_file}\")\n        data_file.parent.mkdir(parents=True, exist_ok=True)\n\n        from tqdm import tqdm\n        r = requests.get(data_url, stream=True)\n        with open(data_file, 'wb') as f:\n            # Tolerate servers that omit the Content-Length header\n            total_length = int(r.headers.get('content-length', 0))\n            for chunk in tqdm(r.iter_content(chunk_size=1024),\n                              unit_divisor=1024, unit='kB', total=int(total_length / 1024) + 1):\n                if chunk:\n                    _ = f.write(chunk)\n                    f.flush()\n        self.logger.info(f\"Finished downloading to {data_file}\")\n\n    @lockutils.synchronized('not_thread_process_safe', external=True,\n                            lock_path=f'{cache_dir}/lock_unzip_file', delay=0.5)\n    def _untar_data(self, compressed_file: Path, save_dir: Union[Path, None] = None):\n        self.logger.debug('Extract the compressed data')\n        with tarfile.open(compressed_file, 'r') as fh:\n            if save_dir is None:\n                save_dir = compressed_file.parent\n            fh.extractall(save_dir)\n        self.logger.debug(f'Successfully extracted the data to {save_dir}')\n\n    @lockutils.synchronized('not_thread_process_safe', external=True,\n                            lock_path=f'{cache_dir}/lock_unzip_file', delay=0.5)\n    def _unzip_data(self, compressed_file: Path, save_dir: Union[Path, None] = None):\n        self.logger.debug('Extract the compressed data')\n        with ZipFile(compressed_file, 'r') as fh:\n            if save_dir is None:\n                save_dir = compressed_file.parent\n            fh.extractall(save_dir)\n        self.logger.debug(f'Successfully extracted the data to {save_dir}')\n\nclass HoldoutDataManager(DataManager):\n    \"\"\"  Base Class for loading and managing the Holdout data sets.\n\n    Attributes\n    ----------\n    X_train : np.ndarray\n    y_train : np.ndarray\n    X_valid : np.ndarray\n    y_valid : np.ndarray\n    X_test : np.ndarray\n    y_test : np.ndarray\n    \"\"\"\n\n    def __init__(self):\n        super().__init__()\n\n        self.X_train = None\n        self.y_train = None\n        self.X_valid = None\n        self.y_valid = None\n        self.X_test = None\n        self.y_test = None\n    \n    \nclass CrossvalidationDataManager(DataManager):\n    \"\"\"\n    Base Class for loading and managing the cross-validation data sets.\n\n    Attributes\n    ----------\n    X_train : np.ndarray\n    y_train : np.ndarray\n    X_test : np.ndarray\n    y_test : np.ndarray\n    \"\"\"\n\n    def __init__(self):\n        super().__init__()\n\n        self.X_train = None\n        self.y_train = None\n        self.X_test = None\n        self.y_test = None\n\n\nclass OpenMLHoldoutDataManager(HoldoutDataManager):\n    \"\"\" Base class for loading holdout data set from OpenML.\n\n    Attributes\n    ----------\n    task_id : int\n    rng : np.random.RandomState\n    name : str\n    variable_types : list\n        Indicating the type of each feature in the loaded data\n        (e.g. 
categorical, numerical)\n\n    Parameters\n    ----------\n    openml_task_id : int\n        Unique identifier for the task on OpenML\n    rng : int, np.random.RandomState, None\n        defines the random state\n    \"\"\"\n\n    def __init__(self, openml_task_id: int, rng: Union[int, np.random.RandomState, None] = None):\n        super(OpenMLHoldoutDataManager, self).__init__()\n\n        self._save_to = data_dir / 'OpenML'\n        self.task_id = openml_task_id\n        self.rng = get_rng(rng=rng)\n        self.name = None\n        self.variable_types = None\n\n        self.create_save_directory(self._save_to)\n\n        openml.config.apikey = '610344db6388d9ba34f6db45a3cf71de'\n        openml.config.set_root_cache_directory(str(self._save_to))\n\n    def load(self) -> Tuple[np.ndarray, np.ndarray, np.ndarray,\n                            np.ndarray, np.ndarray, np.ndarray]:\n        \"\"\"\n        Loads dataset from OpenML in config_file.data_directory.\n        Downloads data if necessary.\n\n        Returns\n        -------\n        X_train: np.ndarray\n        y_train: np.ndarray\n        X_val: np.ndarray\n        y_val: np.ndarray\n        X_test: np.ndarray\n        y_test: np.ndarray\n        \"\"\"\n\n        self.X_train, self.y_train, self.X_test, self.y_test, self.variable_types, self.name = _load_data(self.task_id)\n\n        self.X_train, self.X_valid, self.y_train, self.y_valid = train_test_split(self.X_train,\n                                                                                  self.y_train,\n                                                                                  test_size=0.33,\n                                                                                  stratify=self.y_train,\n                                                                                  random_state=self.rng)\n\n        return self.X_train, self.y_train, self.X_valid, self.y_valid, self.X_test, self.y_test\n\n    @staticmethod\n    def replace_nans_in_cat_columns(X_train: np.ndarray, X_valid: np.ndarray, X_test: np.ndarray,\n                                    is_categorical: Union[np.ndarray, List]) \\\n            -> Tuple[np.ndarray, np.ndarray, np.ndarray, List]:\n        \"\"\" Helper function to replace nan values in categorical features / columns by a non-used value.\n        Here: Min - 1.\n        \"\"\"\n        _cat_data = np.concatenate([X_train, X_valid, X_test], axis=0)\n        nan_index = np.isnan(_cat_data[:, is_categorical])\n        categories = [np.unique(_cat_data[:, i][~nan_index[:, i]])\n                      for i in range(X_train.shape[1]) if is_categorical[i]]\n        replace_nans_with = np.nanmin(_cat_data[:, is_categorical], axis=0) - 1\n\n        categories = [np.concatenate([replace_value.flatten(), cat])\n                      for (replace_value, cat) in zip(replace_nans_with, categories)]\n\n        def _find_and_replace(array, replace_nans_with):\n            nan_idx = np.where(np.isnan(array))\n            array[nan_idx] = np.take(replace_nans_with, nan_idx[1])\n            return array\n\n        X_train[:, is_categorical] = _find_and_replace(X_train[:, is_categorical], replace_nans_with)\n        X_valid[:, is_categorical] = _find_and_replace(X_valid[:, is_categorical], replace_nans_with)\n        X_test[:, is_categorical] = _find_and_replace(X_test[:, is_categorical], replace_nans_with)\n        return X_train, X_valid, X_test, categories\n\n\nclass OpenMLCrossvalidationDataManager(CrossvalidationDataManager):\n    \"\"\" Base class 
for loading cross-validation data set from OpenML.\n\n    Attributes\n    ----------\n    task_id : int\n    rng : np.random.RandomState\n    name : str\n    variable_types : list\n        Indicating the type of each feature in the loaded data\n        (e.g. categorical, numerical)\n\n    Parameters\n    ----------\n    openml_task_id : int\n        Unique identifier for the task on OpenML\n    rng : int, np.random.RandomState, None\n        defines the random state\n    \"\"\"\n\n    def __init__(self, openml_task_id: int, rng: Union[int, np.random.RandomState, None] = None):\n        super(OpenMLCrossvalidationDataManager, self).__init__()\n\n        self._save_to = data_dir / 'OpenML'\n        self.task_id = openml_task_id\n        self.rng = get_rng(rng=rng)\n        self.name = None\n        self.variable_types = None\n\n        self.create_save_directory(self._save_to)\n\n        openml.config.apikey = '610344db6388d9ba34f6db45a3cf71de'\n        openml.config.set_root_cache_directory(str(self._save_to))\n\n    def load(self):\n        \"\"\"\n        Loads dataset from OpenML in config_file.data_directory.\n        Downloads data if necessary.\n        \"\"\"\n\n        X_train, y_train, X_test, y_test, variable_types, name = \\\n            _load_data(self.task_id)\n\n        self.X_train = X_train\n        self.y_train = y_train\n        self.X_test = X_test\n        self.y_test = y_test\n        self.variable_types = variable_types\n        self.name = name\n\n        return self.X_train, self.y_train, self.X_test, self.y_test\n"
  },
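A minimal driver sketch for the holdout manager above, assuming network access; the import path and task id below are illustrative assumptions, not fixed by the source:

```python
# Usage sketch (not part of the repo). The module path is hypothetical --
# point the import at wherever OpenMLHoldoutDataManager lives in the tree.
from data_manager import OpenMLHoldoutDataManager  # hypothetical module name

manager = OpenMLHoldoutDataManager(openml_task_id=258, rng=0)  # id from get_openml100_taskids()
X_train, y_train, X_valid, y_valid, X_test, y_test = manager.load()

print(manager.name, manager.variable_types[:5])
# load() carves a 1/3 validation split out of the OpenML train split
print(X_train.shape, X_valid.shape, X_test.shape)
```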
  {
    "path": "transopt/utils/pareto.py",
    "content": "'''\nPareto-related tools.\n'''\n\nimport numpy as np\nfrom collections.abc import Iterable\nfrom pymoo.indicators.hv import Hypervolume\n\n\ndef convert_minimization(Y, obj_type=None):\n    '''\n    Convert maximization to minimization.\n\n    Example usage:\n    Y = np.array([[1, 4, 3], [2, 1, 4], [3, 2, 2]])\n    obj_type = ['min', 'max', 'min']\n    Y_minimized = convert_minimization(Y, obj_type)\n    '''\n    if obj_type is None: \n        return Y\n\n    if isinstance(obj_type, str):\n        obj_type = [obj_type] * Y.shape[1]\n    assert isinstance(obj_type, Iterable), f'Objective type {type(obj_type)} is not supported'\n\n    maxm_idx = np.array(obj_type) == 'max'\n    Y = Y.copy()\n    Y[:, maxm_idx] = -Y[:, maxm_idx]\n\n    return Y\n\ndef find_pareto_front(Y, return_index=False, obj_type=None, eps=1e-8):\n    '''\n    Find pareto front (undominated part) of the input performance data.\n    '''\n    if len(Y) == 0: return np.array([])\n\n    Y = convert_minimization(Y, obj_type)\n\n    sorted_indices = np.argsort(Y.T[0])\n    pareto_indices = []\n    for idx in sorted_indices:\n        # check domination relationship\n        if not (np.logical_and((Y[idx] - Y > -eps).all(axis=1), (Y[idx] - Y > eps).any(axis=1))).any():\n            pareto_indices.append(idx)\n    pareto_front = np.atleast_2d(Y[pareto_indices].copy())\n\n    if return_index:\n        return pareto_front, pareto_indices\n    else:\n        return pareto_front\n    \n\ndef check_pareto(Y, obj_type=None):\n    '''\n    Check pareto optimality of the input performance data\n\n    Example usage:\n    Y = np.array([[1, 2], [2, 1], [1.5, 1.5]])\n    pareto_optimal = check_pareto(Y)\n    '''\n    Y = convert_minimization(Y, obj_type)\n\n    # find pareto indices\n    sorted_indices = np.argsort(Y.T[0])\n    pareto = np.zeros(len(Y), dtype=bool)\n    for idx in sorted_indices:\n        # check domination relationship\n        if not (np.logical_and((Y <= Y[idx]).all(axis=1), (Y < Y[idx]).any(axis=1))).any():\n            pareto[idx] = True\n    return pareto\n\n\ndef calc_hypervolume(Y, ref_point, obj_type=None):\n    '''\n    Calculate hypervolume\n\n    Example usage:\n    Y = np.array([[1, 2], [2, 1], [1.5, 1.5]])\n    ref_point = np.array([2.5, 2.5])\n    hypervolume = calc_hypervolume(Y, ref_point)\n    '''\n    Y = convert_minimization(Y, obj_type)\n\n    return Hypervolume(ref_point=ref_point).do(Y)\n\n\ndef calc_pred_error(Y, Y_pred_mean, average=False):\n    '''\n    Calculate prediction error\n    '''\n    assert len(Y.shape) == len(Y_pred_mean.shape) == 2\n    pred_error = np.abs(Y - Y_pred_mean)\n    if average:\n        pred_error = np.sum(pred_error, axis=0) / len(Y)\n    return pred_error"
  },
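A short usage sketch of the helpers in `transopt/utils/pareto.py`; the toy points and reference point are made up for illustration:

```python
import numpy as np
from transopt.utils.pareto import find_pareto_front, calc_hypervolume

# Two objectives: minimize f1, maximize f2 (the 'max' column is negated internally).
Y = np.array([[1.0, 4.0], [2.0, 1.0], [3.0, 3.0]])
front, idx = find_pareto_front(Y, return_index=True, obj_type=['min', 'max'])
print(front, idx)

# Hypervolume is computed in minimization space, so the reference point must
# also be stated with the 'max' objective negated.
hv = calc_hypervolume(Y, ref_point=np.array([4.0, 0.0]), obj_type=['min', 'max'])
print(hv)
```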
  {
    "path": "transopt/utils/path.py",
    "content": "import os\nfrom pathlib import Path\n\n\ndef get_library_path():\n    home = Path.home()\n    library_dir_name = \"transopt_files\"\n    library_path = home / library_dir_name\n\n    if not library_path.exists():\n        library_path.mkdir(parents=True, exist_ok=True)\n\n    return library_path\n\ndef get_absolut_path():\n    lib_path = get_library_path()\n    absolut_dir_name = \"Absolut\"\n    absolut_path = lib_path / absolut_dir_name\n    \n    if not absolut_path.exists():\n        absolut_path.mkdir(parents=True, exist_ok=True)\n    \n    return absolut_path\n\n\ndef get_log_file_path():\n    lib_path = get_library_path()\n    log_filename = \"runtime.log\"\n    return lib_path / log_filename\n"
  },
  {
    "path": "transopt/utils/plot.py",
    "content": "import matplotlib.pyplot as plt\nfrom matplotlib import cm\n\n\ndef plot2D(X, Y, c='black', ls='', marker='o', fillstyle=None, label=None, ax=None, file=None, show=False,\n           show_legend=False, bounds=None,title=None,disconnect=None):\n    if ax is None:\n        _, ax = plt.subplots(1, 1)\n    if disconnect is None:\n        ax.plot(X, Y, c=c, ls=ls, marker=marker, label=label,fillstyle=fillstyle)\n    else:\n        for l in disconnect:\n            ax.plot(X[l], Y[l], c=c, ls=ls, marker=marker, label=label, fillstyle=fillstyle)\n    ax.set_xlabel('$f_1(\\mathbf{x})$', fontsize=13)\n    ax.set_ylabel('$f_2(\\mathbf{x})$', fontsize=13)\n    ax.tick_params(axis='both', labelsize=13)\n\n    if show or file is not None:\n        plt.grid()\n        if show_legend:\n            plt.legend()\n        if bounds is not None:\n            plt.xlim((bounds[0, 0], bounds[0, 1]))\n            plt.ylim((bounds[1, 0], bounds[1, 1]))\n    if title:\n        plt.title(title)\n    if file is not None:\n        plt.savefig(file, format='pdf')\n    if show:\n        plt.show()\n    if file is None and show is False:\n        return ax\n    return None\n\n\ndef plot3D(X, Y, Z, c='black', ls='', marker='o', fillstyle=None, label=None, ax=None, file=None, show=False,\n           show_legend=False, bounds=None,title=None):\n    if ax is None:\n        _, ax = plt.subplots(subplot_kw={\"projection\": \"3d\"})\n    ax.plot(X, Y, Z, c=c, ls=ls, marker=marker, label=label,fillstyle=fillstyle)\n    ax.set_xlabel('$f_1(\\mathbf{x})$', fontsize=13)\n    ax.set_ylabel('$f_2(\\mathbf{x})$', fontsize=13)\n    ax.set_zlabel('$f_3(\\mathbf{x})$', fontsize=13)\n    ax.tick_params(axis='both', labelsize=13)\n\n    if show or file is not None:\n        plt.grid()\n        if show_legend:\n            plt.legend()\n        if bounds is not None:\n            ax.set_xlim((bounds[0, 0], bounds[0, 1]))\n            ax.set_ylim((bounds[1, 0], bounds[1, 1]))\n            ax.set_zlim((bounds[2, 0], bounds[2, 1]))\n    if title:\n        plt.title(title)\n    if file is not None:\n        plt.savefig(file, format='pdf')\n    if show:\n        plt.show()\n    if file is None and show is False:\n        return ax\n    return None\n\n\ndef surface3D(X_grid, Y_grid, cmap=cm.Blues, ax=None, file=None, show=False,label=None):\n    if ax is None:\n        _, ax = plt.subplots(subplot_kw={\"projection\": \"3d\"})\n    ax.set_xlabel('$x_1$', fontsize=13)\n    ax.set_ylabel('$x_2$', fontsize=13)\n    ax.set_zlabel('f(\\mathbf{x})', fontsize=13)\n    ax.plot_surface(X_grid, X_grid.T, Y_grid, cmap=cmap,label=label)\n    if file is not None:\n        plt.grid()\n        plt.savefig(file, format='pdf')\n    if show:\n        plt.grid()\n        plt.show()\n    if file is None and show is False:\n        return ax\n    return None"
  },
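A minimal sketch of `plot2D` from `transopt/utils/plot.py`; the curve is a made-up trade-off front and the output filename is arbitrary:

```python
import numpy as np
from transopt.utils.plot import plot2D

f1 = np.linspace(0, 1, 20)
f2 = 1 - np.sqrt(f1)  # a toy bi-objective trade-off curve
# Passing `file` saves a PDF; passing `show=True` would open a window instead,
# and passing neither returns the Axes for further styling.
plot2D(f1, f2, c='tab:blue', marker='o', label='front',
       show_legend=True, file='front.pdf')
```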
  {
    "path": "transopt/utils/profile.py",
    "content": "import cProfile\nimport functools\n\ndef profile_function(filename=None):\n    def profiler_decorator(func):\n        @functools.wraps(func)\n        def wrapper(*args, **kwargs):\n            profiler = cProfile.Profile()\n            profiler.enable()\n            \n            # Execute the function\n            result = func(*args, **kwargs)\n            \n            profiler.disable()\n            # Save stats to a file\n            profiler.dump_stats(filename if filename else func.__name__ + '_profile.prof')\n            \n            return result\n        return wrapper\n    return profiler_decorator"
  },
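A usage sketch for the decorator in `transopt/utils/profile.py`; the function and filename below are examples. The dump is a standard `cProfile` file, so the stock `pstats` module can read it back:

```python
import pstats
from transopt.utils.profile import profile_function

@profile_function('slow_sum.prof')  # omit the argument to dump to slow_sum_profile.prof
def slow_sum(n):
    return sum(i * i for i in range(n))

slow_sum(1_000_000)
# Inspect the five most expensive calls by cumulative time.
pstats.Stats('slow_sum.prof').sort_stats('cumulative').print_stats(5)
```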
  {
    "path": "transopt/utils/rng_helper.py",
    "content": "\"\"\"\nThis file includes code adapted from HPOBench (https://github.com/automl/HPOBench),\nwhich is licensed under the Apache License 2.0. A copy of the license can be\nfound at http://www.apache.org/licenses/LICENSE-2.0.\n\"\"\"\n\n\n\"\"\" Helper functions to easily obtain randomState \"\"\"\nfrom typing import Union, Tuple, List\n\nimport numpy as np\n\n\ndef get_rng(rng: Union[int, np.random.RandomState, None] = None,\n            self_rng: Union[int, np.random.RandomState, None] = None) -> np.random.RandomState:\n    \"\"\"\n    Helper function to obtain RandomState from int or create a new one.\n\n    Sometimes a default random state (self_rng) is already available, but a\n    new random state is desired. In this case ``rng`` is not None and not already\n    a random state (int or None) -> a new random state is created.\n    If ``rng`` is already a randomState, it is just returned.\n    Same if ``rng`` is None, but the default rng is given.\n\n    Parameters\n    ----------\n    rng : int, np.random.RandomState, None\n    self_rng : np.random.RandomState, None\n\n    Returns\n    -------\n    np.random.RandomState\n    \"\"\"\n\n    if rng is not None:\n        return _cast_int_to_random_state(rng)\n    if rng is None and self_rng is not None:\n        return _cast_int_to_random_state(self_rng)\n    return np.random.RandomState()\n\n\ndef _cast_int_to_random_state(rng: Union[int, np.random.RandomState]) -> np.random.RandomState:\n    \"\"\"\n    Helper function to cast ``rng`` from int to np.random.RandomState if necessary.\n\n    Parameters\n    ----------\n    rng : int, np.random.RandomState\n\n    Returns\n    -------\n    np.random.RandomState\n    \"\"\"\n    if isinstance(rng, np.random.RandomState):\n        return rng\n    if int(rng) == rng:\n        # As seed is sometimes -1 (e.g. if SMAC optimizes a deterministic function) -> use abs()\n        return np.random.RandomState(np.abs(rng))\n    raise ValueError(f\"{rng} is neither a number nor a RandomState. Initializing RandomState failed\")\n\n\ndef serialize_random_state(random_state: np.random.RandomState) -> Tuple[int, List, int, int, int]:\n    (rnd0, rnd1, rnd2, rnd3, rnd4) = random_state.get_state()\n    rnd1 = rnd1.tolist()\n    return rnd0, rnd1, rnd2, rnd3, rnd4\n\n\ndef deserialize_random_state(random_state: Tuple[int, List, int, int, int]) -> np.random.RandomState:\n    (rnd0, rnd1, rnd2, rnd3, rnd4) = random_state\n    rnd1 = [np.uint32(number) for number in rnd1]\n    random_state = np.random.RandomState()\n    random_state.set_state((rnd0, rnd1, rnd2, rnd3, rnd4))\n    return random_state\n"
  },
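A small sketch of how these helpers compose: an int becomes a seeded `RandomState`, and serialization round-trips the generator state without advancing it.

```python
import numpy as np
from transopt.utils.rng_helper import (get_rng, serialize_random_state,
                                       deserialize_random_state)

rng = get_rng(rng=42)                # int -> np.random.RandomState(42)
state = serialize_random_state(rng)  # plain tuple/list, storable as JSON
rng2 = deserialize_random_state(state)
assert rng.randint(100) == rng2.randint(100)  # identical next draw
```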
  {
    "path": "transopt/utils/serialization.py",
    "content": "import numpy as np\nfrom abc import abstractmethod, ABC\nfrom dataclasses import dataclass\nfrom typing import Dict, Hashable, Tuple, List\n\n\n@dataclass\nclass InputData:\n    X: np.ndarray\n\n\n@dataclass\nclass TaskData:\n    X: np.ndarray\n    Y: np.ndarray\n\n\n\n# def vectors_to_ndarray(keys_order, X: List[Dict]) -> np.ndarray:\n#     \"\"\"Convert a list of input_vectors to a ndarray.\"\"\"\n#     # Converting dictionaries to lists using the order from keys_order\n#     data = [[vec[key] for key in keys_order] for vec in X]\n\n#     # Converting lists to ndarray\n#     ndarray = np.array(data)\n\n#     return ndarray\n\n# def ndarray_to_vectors(keys_order, ndarray: np.ndarray) -> List[Dict]:\n#     \"\"\"Convert a ndarray to a list of dictionaries.\"\"\"\n#     # Converting ndarray to lists of values\n#     data = ndarray.tolist()\n\n#     # Converting lists of values to dictionaries using keys from keys_order\n#     input_vectors = [{key: value for key, value in zip(keys_order, row)} for row in data]\n\n#     return input_vectors\n\ndef output_to_ndarray(Y: List[Dict]) -> np.ndarray:\n    \"\"\"Extract function_value from each output and convert to ndarray.\"\"\"\n    # Extracting function_value from each dictionary in the list\n    function_values = [[y for name, y in item.items()] for item in Y]\n\n    # Converting list to ndarray\n    ndarray = np.array(function_values)\n\n    return ndarray\n\ndef multioutput_to_ndarray(output_value: List[Dict], num_output:int) -> np.ndarray:\n    \"\"\"Extract function_value from each output and convert to ndarray.\"\"\"\n    # Extracting function_value from each dictionary in the list\n    function_values = []\n    for i in range(1, num_output+1):\n        function_values.append([item[f'function_value_{i}'] for item in output_value])\n\n    # Converting list to ndarray\n    ndarray = np.array(function_values)\n\n    return ndarray\n\n\ndef convert_np_to_bulidin(obj):\n    if isinstance(obj, np.integer):\n        return int(obj)\n    elif isinstance(obj, np.floating):\n        return float(obj)\n    elif isinstance(obj, np.ndarray):\n        return obj.tolist()\n    elif isinstance(obj, dict):\n        return {key: convert_np_to_bulidin(value) for key, value in obj.items()}\n    elif isinstance(obj, list):\n        return [convert_np_to_bulidin(item) for item in obj]\n    else:\n        return obj"
  },
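A usage sketch for `transopt/utils/serialization.py` with made-up output dicts (note that `convert_np_to_bulidin` is the identifier as actually defined in the module):

```python
import json
import numpy as np
from transopt.utils.serialization import (output_to_ndarray,
                                          multioutput_to_ndarray,
                                          convert_np_to_bulidin)

Y = [{'function_value_1': 0.3, 'function_value_2': 1.2},
     {'function_value_1': 0.1, 'function_value_2': 0.9}]
print(output_to_ndarray(Y))          # one row per sample
print(multioutput_to_ndarray(Y, 2))  # one row per objective
# numpy scalars/arrays are not JSON-serializable until converted
print(json.dumps(convert_np_to_bulidin({'best': np.float64(0.1)})))
```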
  {
    "path": "transopt/utils/sk.py",
    "content": "#!/usr/bin/env python3\n# vim: sta:et:sw=2:ts=2:sts=2 :\n\nfrom copy import deepcopy as kopy\nimport sys,random\n\n\"\"\"\nScott-Knot test + non parametric effect size + significance tests.\nTim Menzies, 2019. Share and enjoy. No warranty. Caveat Emptor.\n\nAccepts data as per the following exmaple (you can ignore the \"*n\"\nstuff, that is just there for the purposes of demos on larger\nand larger data)\n\nOuputs treatments, clustered such that things that have similar\nresults get the same ranks.\n\nFor a demo of this code, just run\n\n    python3 sk.py\n\n\"\"\"\n\n#-----------------------------------------------------\n# Examples\n\ndef skDemo(n=5) :\n  #Rx.data is one way to run the code\n  return Rx.data( x1 =[ 0.12, 0.21 ,0.51, 0.7]*n,\n                  x2  =[0.6  ,0.7 , 0.8 , 0.89]*n,\n                  x3  =[0.13 ,0.23, 0.38 , 0.38]*n,\n                  x4  =[0.6  ,0.7,  0.8 , 0.9]*n,\n                  x5  =[0.1  ,0.2,  0.3 , 0.4]*n)\n\n\"\"\"\nAnother is to make a file\n\nx1  0.34  0.49  0.51  0.6\nx2  0.6   0.7   0.8   0.9\nx3  0.15  0.25  0.4   0.35\nx4  0.6   0.7   0.8   0.9\nx5  0.1   0.2   0.3   0.4\n\nThen call \n\n   Rx.fileIn( fileName )\n\n\"\"\"\n\n#-----------------------------------------------------\n# Config\n\nclass o:\n  def __init__(i,**d) : i.__dict__.update(**d)\n\nclass THE:\n  cliffs = o(dull= [0.147, # small\n                    0.33,  # medium\n                    0.474 # large\n                    ][0])\n  bs=     o( conf=0.05,\n             b=500)\n  mine =  o( private=\"_\")\n  char =  o( skip=\"?\")\n  rx   =  o( show=\"%4s %10s %s\")\n  tile =  o( width=50,\n             chops=[0.1 ,0.3,0.5,0.7,0.9],\n             marks=[\" \" ,\"-\",\"-\",\"-\",\" \"],\n             bar=\"|\",\n             star=\"*\",\n             show=\" %5.3f\")\n#-----------------------------------------------------\ndef cliffsDeltaSlow(lst1,lst2, dull = THE.cliffs.dull):\n  \"\"\"Returns true if there are more than 'dull' difference.\n     Warning: O(N)^2.\"\"\"\n  n= gt = lt = 0.0\n  for x in lst1:\n    for y in lst2:\n      n += 1\n      if x > y:  gt += 1\n      if x < y:  lt += 1\n  return abs(lt - gt)/n <= dull\n\ndef cliffsDelta(lst1, lst2,  dull=THE.cliffs.dull):\n  \"By pre-soring the lists, this cliffsDelta runs in NlogN time\"\n  def runs(lst):\n    for j,two in enumerate(lst):\n      if j == 0: one,i = two,0\n      if one!=two:\n        yield j - i,one\n        i = j\n      one=two\n    yield j - i + 1,two\n  #---------------------\n  m, n = len(lst1), len(lst2)\n  lst2 = sorted(lst2)\n  j = more = less = 0\n  for repeats,x in runs(sorted(lst1)):\n    while j <= (n - 1) and lst2[j] <  x: j += 1\n    more += j*repeats\n    while j <= (n - 1) and lst2[j] == x: j += 1\n    less += (n - j)*repeats\n  d= (more - less) / (m*n)\n  return abs(d)  <= dull\n\ndef bootstrap(y0,z0,conf=THE.bs.conf,b=THE.bs.b):\n  \"\"\"\n  two  lists y0,z0 are the same if the same patterns can be seen in all of them, as well\n  as in 100s to 1000s  sub-samples from each. 
\n  From p220 to 223 of the Efron text  'Introduction to the Bootstrap'.\n  Typically, conf=0.05 and b is 100s to 1000s.\n  \"\"\"\n  class Sum():\n    def __init__(i,some=[]):\n      i.sum = i.n = i.mu = 0 ; i.all=[]\n      for one in some: i.put(one)\n    def put(i,x):\n      i.all.append(x);\n      i.sum +=x; i.n += 1; i.mu = float(i.sum)/i.n\n    def __add__(i1,i2): return Sum(i1.all + i2.all)\n  def testStatistic(y,z):\n     tmp1 = tmp2 = 0\n     for y1 in y.all: tmp1 += (y1 - y.mu)**2\n     for z1 in z.all: tmp2 += (z1 - z.mu)**2\n     s1    = float(tmp1)/(y.n - 0.9)\n     s2    = float(tmp2)/(z.n - 0.9)\n     delta = z.mu - y.mu\n     if s1+s2:\n       delta =  delta/((s1/y.n + s2/z.n)**0.5)\n     return delta\n  def one(lst): return lst[ int(any(len(lst))) ]\n  def any(n)  : return random.uniform(0,n)\n  y,z  = Sum(y0), Sum(z0)\n  x    = y + z\n  baseline = testStatistic(y,z)\n  yhat = [y1 - y.mu + x.mu for y1 in y.all]\n  zhat = [z1 - z.mu + x.mu for z1 in z.all]\n  bigger = 0\n  for i in range(b):\n    if testStatistic(Sum([one(yhat) for _ in yhat]),\n                     Sum([one(zhat) for _ in zhat])) > baseline:\n      bigger += 1\n  return bigger / b >= conf\n\n#-------------------------------------------------------\n# misc functions\ndef same(x): return x\n\nclass Mine:\n  \"class that, amongst other things, pretty prints objects\"\n  oid = 0\n  def identify(i):\n    Mine.oid += 1\n    i.oid = Mine.oid\n    return i.oid\n  def __repr__(i):\n    pairs = sorted([(k, v) for k, v in i.__dict__.items()\n                    if k[0] != THE.mine.private])\n    pre = i.__class__.__name__ + '{'\n    def q(z):\n     if isinstance(z,str): return \"'%s'\" % z\n     if callable(z): return \"fun(%s)\" % z.__name__\n     return str(z)\n    return pre + \", \".join(['%s=%s' % (k, q(v)) for k, v in pairs]) + '}'\n\n#-------------------------------------------------------\nclass Rx(Mine):\n  \"place to manage pairs of (TreatmentName,ListofResults)\"\n  def __init__(i, rx=\"\",vals=[], key=same):\n    i.rx   = rx\n    i.vals = sorted([x for x in vals if x != THE.char.skip])\n    i.n    = len(i.vals)\n    i.med  = i.vals[int(i.n/2)]\n    i.mu   = sum(i.vals)/i.n\n    i.rank = 1\n  def tiles(i,lo=0,hi=1): return  xtile(i.vals,lo,hi)\n  def __lt__(i,j):        return i.med < j.med\n  def __eq__(i,j):\n    return cliffsDelta(i.vals,j.vals) and \\\n            bootstrap(i.vals,j.vals)\n  def __repr__(i):\n    return '%4s %10s %s' % (i.rank, i.rx, i.tiles())\n  def xpect(i,j,b4):\n    \"Expected value of difference in means before and after a split\"\n    n = i.n + j.n\n    return i.n/n * (b4.med- i.med)**2 + j.n/n * (j.med-b4.med)**2\n\n  #-- end instance methods --------------------------\n\n  @staticmethod\n  def data(**d):\n    \"convert dictionary to list of treatments\"\n    return [Rx(k,v) for k,v in d.items()]\n\n  @staticmethod\n  def fileIn(f):\n    d={}\n    what=None\n    for word in words(f):\n       x = thing(word)\n       if isinstance(x,str): \n          what=x\n          d[what] = d.get(what,[])\n       else:\n          d[what] += [x]\n    # print('---------------')\n    # print(Rx.data(**d))\n    Rx.write(Rx.sk(Rx.data(**d)))\n\n  @staticmethod\n  def sum(rxs):\n    \"make a new rx from all the rxs' vals\"\n    all = []\n    for rx in rxs:\n        for val in rx.vals:\n            all += [val]\n    return Rx(vals=all)\n\n  @staticmethod\n  def show(rxs):\n    \"pretty print set of treatments\"\n    tmp=Rx.sum(rxs)\n    lo,hi=tmp.vals[0], tmp.vals[-1]\n    for rx in sorted(rxs):\n        print(THE.rx.show % 
(rx.rank, rx.rx, rx.tiles()))\n\n  @staticmethod\n  def write(rxs):\n    \"pretty write set of treatments\"\n    tmp=Rx.sum(rxs)\n    lo,hi=tmp.vals[0], tmp.vals[-1]\n    with open('./scott_knot.txt', 'a') as write_f:\n        for rx in sorted(rxs):\n            write_f.write(THE.rx.show % (rx.rank, rx.rx, rx.tiles()) + '\\r\\n')\n\n\n  @staticmethod\n  def sk(rxs):\n    \"sort treatments and rank them\"\n    def divide(lo,hi,b4,rank):\n      cut = left=right=None\n      best = 0\n      for j in range(lo+1,hi):\n          left0  = Rx.sum( rxs[lo:j] )\n          right0 = Rx.sum( rxs[j:hi] )\n          now    = left0.xpect(right0, b4)\n          if now > best:\n              if left0 != right0:\n                  best, cut,left,right = now,j,kopy(left0),kopy(right0)\n      if cut:\n        rank = divide(lo, cut, left, rank) + 1\n        rank = divide(cut ,hi, right,rank)\n      else:\n        for rx in rxs[lo:hi]:\n          rx.rank = rank\n      return rank\n    #-- sk main\n    rxs=sorted(rxs)\n    divide(0, len(rxs),Rx.sum(rxs),1)\n    return rxs\n\n#-------------------------------------------------------\ndef pairs(lst):\n    \"Return all pairs of items i,i+1 from a list.\"\n    last=lst[0]\n    for i in lst[1:]:\n         yield last,i\n         last = i\n\ndef words(f):\n  with open(f) as fp:\n    for line in fp:\n       for word in line.split():\n          yield word\n\ndef xtile(lst,lo,hi,\n             width= THE.tile.width,\n             chops= THE.tile.chops,\n             marks= THE.tile.marks,\n             bar=   THE.tile.bar,\n             star=  THE.tile.star,\n             show=  THE.tile.show):\n  \"\"\"The function _xtile_ takes a list of (possibly)\n  unsorted numbers and presents them as a horizontal\n  xtile chart (in ascii format). The default is a\n  contracted _quintile_ that shows the\n  10,30,50,70,90 breaks in the data (but this can be\n  changed- see the optional flags of the function).\n  \"\"\"\n  def pos(p)   : return ordered[int(len(lst)*p)]\n  def place(x) :\n    return int(width*float((x - lo))/(hi - lo+0.00001))\n  def pretty(lst) :\n    return ', '.join([show % x for x in lst])\n  ordered = sorted(lst)\n  lo      = min(lo,ordered[0])\n  hi      = max(hi,ordered[-1])\n  what    = [pos(p)   for p in chops]\n  where   = [place(n) for n in  what]\n  out     = [\" \"] * width\n  for one,two in pairs(where):\n    for i in range(one,two):\n      out[i] = marks[0]\n    marks = marks[1:]\n  out[int(width/2)]    = bar\n  out[place(pos(0.5))] = star\n  return '('+''.join(out) +  \"),\" +  pretty(what)\n\ndef thing(x):\n  \"Numbers become numbers; every other x is a symbol.\"\n  try: return int(x)\n  except ValueError:\n    try: return float(x)\n    except ValueError:\n      return x\n\n#-------------------------------------------------------\ndef _cliffsDelta():\n  \"demo function\"\n  lst1=[1,2,3,4,5,6,7]*100\n  n=1\n  for _ in range(10):\n      lst2=[x*n for x in lst1]\n      print(cliffsDelta(lst1,lst2),n) # should return False\n      n*=1.03\n\ndef bsTest(n=1000,mu1=10,sigma1=1,mu2=10.2,sigma2=1):\n   def g(mu,sigma) : return random.gauss(mu,sigma)\n   x = [g(mu1,sigma1) for i in range(n)]\n   y = [g(mu2,sigma2) for i in range(n)]\n   return n,mu1,sigma1,mu2,sigma2,\\\n          'same' if bootstrap(x,y) else 'different'\n\n#-------------------------------------------------------\n\nif __name__ == \"__main__\":\n  # random.seed(1)\n\n    b = [[2], [1, 4, 5, 7,11]]\n    a = Rx.data(x1=[1], x2=[2,3,4],x3=[3])\n\n\n\n    Rx.show(Rx.sk(a))\n\n\n"
  },
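A compact usage sketch for the Scott-Knott ranking in `transopt/utils/sk.py`; the treatment names and scores are invented:

```python
from transopt.utils.sk import Rx

# Treatments whose result distributions are statistically indistinguishable
# (Cliff's delta + bootstrap) end up sharing a rank.
rxs = Rx.data(baseline=[0.61, 0.63, 0.62, 0.60] * 5,
              variant=[0.70, 0.72, 0.71, 0.69] * 5)
for rx in Rx.sk(rxs):
    print(rx.rank, rx.rx)
```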
  {
    "path": "transopt/utils/weights.py",
    "content": "import math\nimport numpy as np\n\n\ndef _set_weight(w, c, v, unit, s, n_obj, dim):\n    if dim == n_obj:\n        v = np.zeros(shape=(n_obj, 1))\n    if dim == 1:\n        c = c + 1\n        v[0] = unit - s\n        w[:, c - 1] = v[:, 0]\n        return w, c\n\n    for i in range(unit - s + 1):\n        v[dim - 1] = i\n        w, c = _set_weight(w, c, v, unit, s + i, n_obj, dim - 1)\n    return w, c\n\n\ndef _no_weight(unit, s, dim):\n    m = 0\n    if dim == 1:\n        m = 1\n        return m\n    for i in range(unit - s + 1):\n        m = m + _no_weight(unit, s + i, dim - 1)\n    return m\n\n\ndef init_weight(n_obj, n_sample):\n    if n_obj == 1:\n        return np.expand_dims(np.linspace(0,1,n_sample),-1)\n    u = math.floor(math.pow(n_sample, 1.0 / (n_obj - 1))) - 2\n\n    m = 0\n    while m < n_sample:\n        u = u + 1\n        m = _no_weight(u, 0, n_obj)\n    if m != n_sample:\n        print(f'Warning number of weights {n_sample} except {m}!')\n    w = np.zeros(shape=(n_obj, m))\n    c = 0\n    v = np.zeros(shape=(n_obj, 1))\n    w, c = _set_weight(w, c, v, u, 0, n_obj, n_obj)\n    w = w / (u + 0.0)\n    return w.T\n\ndef tchebycheff(X, W, ideal=None, normalize=False):\n    \"\"\"\n    :param X:  data points np array with (1, n_var) or (n_sample, n_var)\n    :param W:  weights np array with (1, n_var) or (n_sample, n_var)\n    :param ideal:\n    :param normalize:\n    :param return_index:\n    :return: np array with (n_sample, )\n    \"\"\"\n    X = np.atleast_2d(X)\n    W = np.atleast_2d(W)\n\n    n_sample = X.shape[0]\n    n_weight = W.shape[0]\n\n    if n_sample == 1 and n_weight != 1:\n        X = np.tile(X, (n_weight, 1))\n    if n_weight == 1 and n_sample != 1:\n        W = np.tile(W, (n_sample, 1))\n\n    if ideal is None:\n        ideal = np.zeros((1, X.shape[1]))\n    if normalize:\n        norm_x = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) - ideal\n    else:\n        norm_x = X - ideal\n\n        return np.expand_dims(np.max(norm_x * W, axis=1), -1)"
  },
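A brief sketch of the two helpers in `transopt/utils/weights.py`; the objective vector is made up:

```python
import numpy as np
from transopt.utils.weights import init_weight, tchebycheff

W = init_weight(n_obj=2, n_sample=5)  # 5 evenly spaced weights on the simplex
print(W)                              # each row sums to 1

y = np.array([[0.2, 0.8]])            # one objective vector
print(tchebycheff(y, W))              # shape (5, 1): one scalarization per weight
```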
  {
    "path": "webui/.gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n\nnode_modules/\n"
  },
  {
    "path": "webui/LICENSE.md",
    "content": "MIT License\n\nCopyright (c) 2022 Dashwind - Admin Dashboard Template\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE."
  },
  {
    "path": "webui/package.json",
    "content": "{\n  \"name\": \"admin-dashboard-template-dashwind\",\n  \"version\": \"1.0.0\",\n  \"description\": \"Admin Dashboard template built with create-react-app, tailwind css and daisy UI. Template uses rich tailwind css utility classes and have components of daisy UI, also have redux toolkit implemented for store management.\",\n  \"scripts\": {\n    \"start\": \"react-scripts start\",\n    \"build\": \"react-scripts build\",\n    \"test\": \"react-scripts test\",\n    \"eject\": \"react-scripts eject\"\n  },\n  \"dependencies\": {\n    \"@chatui/core\": \"^2.4.2\",\n    \"@heroicons/react\": \"^2.0.13\",\n    \"@reduxjs/toolkit\": \"^1.9.0\",\n    \"@testing-library/jest-dom\": \"^5.16.5\",\n    \"@testing-library/react\": \"^13.4.0\",\n    \"@testing-library/user-event\": \"^13.5.0\",\n    \"antd\": \"^5.20.5\",\n    \"axios\": \"^1.1.3\",\n    \"bizcharts\": \"^4.1.23\",\n    \"capitalize-the-first-letter\": \"^1.0.8\",\n    \"chart.js\": \"^4.0.1\",\n    \"dayjs\": \"^1.11.7\",\n    \"echarts\": \"^5.5.1\",\n    \"echarts-for-react\": \"^3.0.2\",\n    \"moment\": \"^2.29.4\",\n    \"react\": \"^18.3.1\",\n    \"react-chartjs-2\": \"^5.0.1\",\n    \"react-dom\": \"^18.3.1\",\n    \"react-notifications\": \"^1.7.4\",\n    \"react-redux\": \"^8.0.5\",\n    \"react-router-dom\": \"^6.4.3\",\n    \"react-scripts\": \"^5.0.1\",\n    \"react-tailwindcss-datepicker\": \"^1.6.0\",\n    \"reactstrap\": \"^9.2.2\",\n    \"theme-change\": \"^2.2.0\",\n    \"web-vitals\": \"^2.1.4\"\n  },\n  \"repository\": {\n    \"type\": \"git\",\n    \"url\": \"git+https://github.com/srobbin01/tailwind-dashboard-template-dashwind\"\n  },\n  \"keywords\": [\n    \"reactjs\",\n    \"tailwind-css\",\n    \"starter-kit\",\n    \"saas-starter-kit\",\n    \"reduxt-toolkit-dashboard-template\",\n    \"daisyui-template\",\n    \"dashboard-template\",\n    \"react-router\",\n    \"react-charts\"\n  ],\n  \"author\": \"srobbin01\",\n  \"license\": \"ISC\",\n  \"bugs\": {\n    \"url\": \"https://github.com/srobbin01/tailwind-dashboard-template-dashwind/issues\"\n  },\n  \"homepage\": \"\",\n  \"eslintConfig\": {\n    \"extends\": [\n      \"react-app\",\n      \"react-app/jest\"\n    ]\n  },\n  \"browserslist\": {\n    \"production\": [\n      \">0.2%\",\n      \"not dead\",\n      \"not op_mini all\"\n    ],\n    \"development\": [\n      \"last 1 chrome version\",\n      \"last 1 firefox version\",\n      \"last 1 safari version\"\n    ]\n  },\n  \"devDependencies\": {\n    \"@tailwindcss/typography\": \"^0.5.8\",\n    \"autoprefixer\": \"^10.4.13\",\n    \"daisyui\": \"^4.4.19\",\n    \"postcss\": \"^8.4.19\",\n    \"tailwindcss\": \"^3.3.6\"\n  }\n}\n"
  },
  {
    "path": "webui/public/index.html",
    "content": "<!DOCTYPE html>\n<html lang=\"en\">\n  <head>\n    <meta charset=\"utf-8\" />\n    <link rel=\"icon\" href=\"%PUBLIC_URL%/transopt.png\" />\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n    <meta name=\"theme-color\" content=\"#000000\" />\n    <meta\n      name=\"description\"\n      content=\"A free admin dashboard template using Daisy UI and React js.\"\n    />\n    <link rel=\"apple-touch-icon\" href=\"%PUBLIC_URL%/transopt.png\" />\n    <!--\n      manifest.json provides metadata used when your web app is installed on a\n      user's mobile device or desktop. See https://developers.google.com/web/fundamentals/web-app-manifest/\n    -->\n    <link rel=\"manifest\" href=\"%PUBLIC_URL%/manifest.json\" />\n    <!--\n      Notice the use of %PUBLIC_URL% in the tags above.\n      It will be replaced with the URL of the `public` folder during the build.\n      Only files inside the `public` folder can be referenced from the HTML.\n\n      Unlike \"/favicon.ico\" or \"favicon.ico\", \"%PUBLIC_URL%/favicon.ico\" will\n      work correctly both with client-side routing and a non-root public URL.\n      Learn how to configure a non-root public URL by running `npm run build`.\n    -->\n    <title> TransOPT </title>\n    <meta name=\"description\" content=\"Get a customizable and easily-themed admin dashboard template using Daisy UI and React js. Boost your productivity with pre-configured redux toolkit and other libraries.\">\n  </head>\n  <body>\n    <noscript>You need to enable JavaScript to run this app.</noscript>\n    <div id=\"root\"></div>\n    <!--\n      This HTML file is a template.\n      If you open it directly in the browser, you will see an empty page.\n\n      You can add webfonts, meta tags, or analytics to this file.\n      The build step will place the bundled scripts into the <body> tag.\n\n      To begin the development, run `npm start` or `yarn start`.\n      To create a production bundle, use `npm run build` or `yarn build`.\n    -->\n  </body>\n</html>\n"
  },
  {
    "path": "webui/public/manifest.json",
    "content": "{\n  \"short_name\": \"TransOPT\",\n  \"name\": \"TransOPT\",\n  \"icons\": [\n    {\n      \"src\": \"transopt.png\",\n      \"sizes\": \"64x64 32x32 24x24 16x16\",\n      \"type\": \"image/png\"\n    },\n    {\n      \"src\": \"transopt.png\",\n      \"type\": \"image/png\",\n      \"sizes\": \"192x192\"\n    },\n    {\n      \"src\": \"transopt.png\",\n      \"type\": \"image/png\",\n      \"sizes\": \"512x512\"\n    }\n  ],\n  \"start_url\": \".\",\n  \"display\": \"standalone\",\n  \"theme_color\": \"#000000\",\n  \"background_color\": \"#ffffff\"\n}\n"
  },
  {
    "path": "webui/public/robots.txt",
    "content": "# https://www.robotstxt.org/robotstxt.html\nUser-agent: *\nDisallow:\n"
  },
  {
    "path": "webui/src/App.css",
    "content": ".App {\n  text-align: center;\n}\n\n.App-logo {\n  height: 40vmin;\n  pointer-events: none;\n}\n\n@media (prefers-reduced-motion: no-preference) {\n  .App-logo {\n    animation: App-logo-spin infinite 20s linear;\n  }\n}\n\n.App-header {\n  background-color: #282c34;\n  min-height: 100vh;\n  display: flex;\n  flex-direction: column;\n  align-items: center;\n  justify-content: center;\n  font-size: calc(10px + 2vmin);\n  color: white;\n}\n\n.App-link {\n  color: #61dafb;\n}\n\n@keyframes App-logo-spin {\n  from {\n    transform: rotate(0deg);\n  }\n  to {\n    transform: rotate(360deg);\n  }\n}\n"
  },
  {
    "path": "webui/src/App.js",
    "content": "import React, { lazy, useEffect } from 'react'\nimport './App.css';\nimport { BrowserRouter as Router, Route, Routes, Navigate, Redirect, Switch} from 'react-router-dom'\n\n\nimport { themeChange } from 'theme-change'\nimport checkAuth from './app/auth';\nimport initializeApp from './app/init';\n\n\n\n// Importing pages\nconst Layout = lazy(() => import('./containers/Layout'))\n\n\n// Initializing different libraries\ninitializeApp()\n\n\n// Check for login and initialize axios\nconst token = checkAuth()\n\n\nfunction App() {\n\n  useEffect(() => {\n    // 👆 daisy UI themes initialization\n    themeChange(false)\n  }, [])\n\n\n  return (\n    <>\n      <Router>\n        <Routes>\n          <Route path=\"/app/*\" element={<Layout />} />\n          <Route path=\"*\" element={<Navigate to=\"/app/welcome\" replace />} />\n        </Routes>\n      </Router>\n    </>\n  )\n}\n\nexport default App\n"
  },
  {
    "path": "webui/src/App.test.js",
    "content": "import { render, screen } from '@testing-library/react';\nimport App from './App';\n\ntest('renders learn react link', () => {\n  render(<App />);\n  const linkElement = screen.getByText(/learn react/i);\n  expect(linkElement).toBeInTheDocument();\n});\n"
  },
  {
    "path": "webui/src/app/auth.js",
    "content": "import axios from \"axios\"\n\nconst checkAuth = () => {\n/*  Getting token value stored in localstorage, if token is not present we will open login page \n    for all internal dashboard routes  */\n    const TOKEN = localStorage.getItem(\"token\")\n    const PUBLIC_ROUTES = [\"login\", \"forgot-password\", \"register\", \"documentation\"]\n\n    const isPublicPage = PUBLIC_ROUTES.some( r => window.location.href.includes(r))\n\n    if(!TOKEN && !isPublicPage){\n        window.location.href = '/login'\n        return;\n    }else{\n        axios.defaults.headers.common['Authorization'] = `Bearer ${TOKEN}`\n\n        axios.interceptors.request.use(function (config) {\n            // UPDATE: Add this code to show global loading indicator\n            document.body.classList.add('loading-indicator');\n            return config\n          }, function (error) {\n            return Promise.reject(error);\n          });\n          \n          axios.interceptors.response.use(function (response) {\n            // UPDATE: Add this code to hide global loading indicator\n            document.body.classList.remove('loading-indicator');\n            return response;\n          }, function (error) {\n            document.body.classList.remove('loading-indicator');\n            return Promise.reject(error);\n          });\n        return TOKEN\n    }\n}\n\nexport default checkAuth"
  },
  {
    "path": "webui/src/app/init.js",
    "content": "import axios from \"axios\"\n\nconst initializeApp = () => {\n    \n    // Setting base URL for all API request via axios\n    axios.defaults.baseURL = process.env.REACT_APP_BASE_URL\n\n\n    if (!process.env.NODE_ENV || process.env.NODE_ENV === 'development') {\n        // dev code\n\n\n\n    } else {\n        // Prod build code\n\n\n\n        // Removing console.log from prod\n        console.log = () => {};\n\n\n        // init analytics here\n    }\n\n}\n\nexport default initializeApp"
  },
  {
    "path": "webui/src/app/store.js",
    "content": "import { configureStore } from '@reduxjs/toolkit'\nimport headerSlice from '../features/common/headerSlice'\nimport modalSlice from '../features/common/modalSlice'\nimport rightDrawerSlice from '../features/common/rightDrawerSlice'\nimport leadsSlice from '../features/leads/leadSlice'\n\nconst combinedReducer = {\n  header : headerSlice,\n  rightDrawer : rightDrawerSlice,\n  modal : modalSlice,\n  lead : leadsSlice\n}\n\nexport default configureStore({\n    reducer: combinedReducer\n})"
  },
  {
    "path": "webui/src/components/CalendarView/index.js",
    "content": "import { useEffect, useState } from \"react\";\nimport  ChevronLeftIcon from \"@heroicons/react/24/solid/ChevronLeftIcon\";\nimport  ChevronRightIcon  from \"@heroicons/react/24/solid/ChevronRightIcon\";\nimport moment from \"moment\";\nimport { CALENDAR_EVENT_STYLE } from \"./util\";\n\nconst THEME_BG = CALENDAR_EVENT_STYLE\n\nfunction CalendarView({calendarEvents, addNewEvent, openDayDetail}){\n\n    const today = moment().startOf('day')\n    const weekdays = [\"sun\", \"mon\", \"tue\", \"wed\", \"thu\", \"fri\", \"sat\"];\n    const colStartClasses = [\n      \"\",\n      \"col-start-2\",\n      \"col-start-3\",\n      \"col-start-4\",\n      \"col-start-5\",\n      \"col-start-6\",\n      \"col-start-7\",\n  ];\n\n    const [firstDayOfMonth, setFirstDayOfMonth] = useState(moment().startOf('month'))\n    const [events, setEvents] = useState([])\n    const [currMonth, setCurrMonth] = useState(() => moment(today).format(\"MMM-yyyy\"));\n\n    useEffect(() => {\n        setEvents(calendarEvents)\n    }, [calendarEvents])\n    \n\n    const allDaysInMonth = ()=> {\n        let start = moment(firstDayOfMonth).startOf('week')\n        let end = moment(moment(firstDayOfMonth).endOf('month')).endOf('week')\n        var days = [];\n        var day = start;\n        while (day <= end) {\n            days.push(day.toDate());\n            day = day.clone().add(1, 'd');\n        }\n        return days\n    }\n\n    const getEventsForCurrentDate = (date) => {\n        let filteredEvents = events.filter((e) => {return moment(date).isSame(moment(e.startTime), 'day') } )\n        if(filteredEvents.length > 2){\n            let originalLength = filteredEvents.length\n            filteredEvents = filteredEvents.slice(0, 2)\n            filteredEvents.push({title : `${originalLength - 2} more`, theme : \"MORE\"})\n        }\n        return filteredEvents\n    }\n\n    const openAllEventsDetail = (date, theme) => {\n        if(theme != \"MORE\")return 1\n        let filteredEvents = events.filter((e) => {return moment(date).isSame(moment(e.startTime), 'day') } ).map((e) => {return {title : e.title, theme : e.theme}})\n        openDayDetail({filteredEvents, title : moment(date).format(\"D MMM YYYY\")})\n    }\n\n    const isToday = (date) => {\n        return moment(date).isSame(moment(), 'day');\n    }\n\n    const isDifferentMonth = (date) => {\n        return moment(date).month() != moment(firstDayOfMonth).month() \n    }\n\n    const getPrevMonth = (event) => {\n        const firstDayOfPrevMonth = moment(firstDayOfMonth).add(-1, 'M').startOf('month');\n        setFirstDayOfMonth(firstDayOfPrevMonth)\n        setCurrMonth(moment(firstDayOfPrevMonth).format(\"MMM-yyyy\"));\n    };\n\n    const getCurrentMonth = (event) => {\n        const firstDayOfCurrMonth = moment().startOf('month');\n        setFirstDayOfMonth(firstDayOfCurrMonth)\n        setCurrMonth(moment(firstDayOfCurrMonth).format(\"MMM-yyyy\"));\n    };\n\n    const getNextMonth = (event) => {\n        const firstDayOfNextMonth = moment(firstDayOfMonth).add(1, 'M').startOf('month');\n        setFirstDayOfMonth(firstDayOfNextMonth)\n        setCurrMonth(moment(firstDayOfNextMonth).format(\"MMM-yyyy\"));\n    };\n \n    return(\n        <>\n      <div className=\"w-full  bg-base-100 p-4 rounded-lg\">\n        <div className=\"flex items-center justify-between\">\n          <div className=\"flex  justify-normal gap-2 sm:gap-4\">\n          <p className=\"font-semibold text-xl w-48\">\n                    
{moment(firstDayOfMonth).format(\"MMMM yyyy\").toString()}<span className=\"text-xs ml-2 \">Beta</span>\n                </p>\n\n                    <button className=\"btn  btn-square btn-sm btn-ghost\"  onClick={getPrevMonth}><ChevronLeftIcon\n                    className=\"w-5 h-5\"\n                     \n                    /></button>\n                    <button className=\"btn  btn-sm btn-ghost normal-case\" onClick={getCurrentMonth}>\n                      \n                    Current Month</button>\n                     <button className=\"btn btn-square btn-sm btn-ghost\" onClick={getNextMonth}><ChevronRightIcon\n                    className=\"w-5 h-5\"\n                      \n                    /></button>\n            </div>\n            <div>\n                <button className=\"btn  btn-sm btn-ghost btn-outline normal-case\" onClick={addNewEvent}>Add New Event</button>\n            </div>\n            \n        </div>\n        <div className=\"my-4 divider\" />\n        <div className=\"grid grid-cols-7 gap-6 sm:gap-12 place-items-center\">\n          {weekdays.map((day, key) => {\n            return (\n              <div  className=\"text-xs capitalize\" key={key}>\n                {day}\n              </div>\n            );\n          })}\n        </div>\n\n             \n        <div className=\"grid grid-cols-7 mt-1  place-items-center\">\n          {allDaysInMonth().map((day, idx) => {\n            return (\n              <div key={idx} className={colStartClasses[moment(day).day().toString()] + \" border border-solid w-full h-28  \"}>\n                <p className={`inline-block flex items-center  justify-center h-8 w-8 rounded-full mx-1 mt-1 text-sm cursor-pointer hover:bg-base-300 ${isToday(day) && \" bg-blue-100 dark:bg-blue-400 dark:hover:bg-base-300 dark:text-white\"} ${isDifferentMonth(day) && \" text-slate-400 dark:text-slate-600\"}`} onClick={() => addNewEvent(day)}> { moment(day).format(\"D\") }</p>\n                {\n                    getEventsForCurrentDate(day).map((e, k) => {\n                        return <p key={k} onClick={() => openAllEventsDetail(day, e.theme)} className={`text-xs px-2 mt-1 truncate  ${THEME_BG[e.theme] || \"\"}`}>{e.title}</p>\n                    })\n                }\n              </div>\n            );\n          })}\n        </div>\n\n   \n      </div>\n        </>\n    )\n}\n\n\nexport default CalendarView"
  },
  {
    "path": "webui/src/components/CalendarView/util.js",
    "content": "const moment  = require(\"moment\");\n\nmodule.exports = Object.freeze({\n    CALENDAR_EVENT_STYLE : {\n        \"BLUE\" : \"bg-blue-200 dark:bg-blue-600 dark:text-blue-100\",\n        \"GREEN\" : \"bg-green-200 dark:bg-green-600 dark:text-green-100\",\n        \"PURPLE\" : \"bg-purple-200 dark:bg-purple-600 dark:text-purple-100\",\n        \"ORANGE\" : \"bg-orange-200 dark:bg-orange-600 dark:text-orange-100\",\n        \"PINK\" : \"bg-pink-200 dark:bg-pink-600 dark:text-pink-100\",\n        \"MORE\" : \"hover:underline cursor-pointer font-medium \"\n    }\n\n    \n});\n"
  },
  {
    "path": "webui/src/components/Cards/TitleCard.js",
    "content": "import Subtitle from \"../Typography/Subtitle\"\n\n  \n  function TitleCard({title, children, topMargin, TopSideButtons}){\n      return(\n          <div className={\"card w-full p-6 bg-base-100 shadow-xl \" + (topMargin || \"mt-6\")}>\n\n            {/* Title for Card */}\n              <Subtitle styleClass={TopSideButtons ? \"inline-block\" : \"\"}>\n                {title}\n\n                {/* Top side button, show only if present */}\n                {\n                    TopSideButtons && <div className=\"inline-block float-right\">{TopSideButtons}</div>\n                }\n              </Subtitle>\n              \n              <div className=\"divider mt-2\"></div>\n          \n              {/** Card Body */}\n              <div className='h-full w-full pb-6 bg-base-100'>\n                  {children}\n              </div>\n          </div>\n          \n      )\n  }\n  \n  \n  export default TitleCard"
  },
  {
    "path": "webui/src/components/Input/InputText.js",
    "content": "import { useState } from \"react\"\n\n\nfunction InputText({labelTitle, labelStyle, type, containerStyle, defaultValue, placeholder, updateFormValue, updateType}){\n\n    const [value, setValue] = useState(defaultValue)\n\n    const updateInputValue = (val) => {\n        setValue(val)\n        updateFormValue({updateType, value : val})\n    }\n\n    return(\n        <div className={`form-control w-full ${containerStyle}`}>\n            <label className=\"label\">\n                <span className={\"label-text text-base-content \" + labelStyle}>{labelTitle}</span>\n            </label>\n            <input type={type || \"text\"} value={value} placeholder={placeholder || \"\"} onChange={(e) => updateInputValue(e.target.value)}className=\"input  input-bordered w-full \" />\n        </div>\n    )\n}\n\n\nexport default InputText"
  },
  {
    "path": "webui/src/components/Input/SearchBar.js",
    "content": "\n\nimport React, { useEffect } from 'react'\n\nfunction SearchBar({searchText, styleClass, placeholderText, setSearchText}) {\n\n\n\nconst updateSearchInput = (value) => {\n    setSearchText(value)\n}\n\n  return (\n    <div className={\"inline-block \" + styleClass}>\n    <div className=\"input-group  relative flex flex-wrap items-stretch w-full \">\n      <input type=\"search\" value={searchText} placeholder={placeholderText || \"Search\"} onChange={(e) => updateSearchInput(e.target.value)} className=\"input input-sm input-bordered  w-full max-w-xs\" />\n  </div>\n</div>\n  )\n}\n\nexport default SearchBar\n"
  },
  {
    "path": "webui/src/components/Input/SelectBox.js",
    "content": "\nimport axios from 'axios'\nimport capitalize from 'capitalize-the-first-letter'\nimport React, { useState, useEffect } from 'react'\nimport InformationCircleIcon from '@heroicons/react/24/outline/InformationCircleIcon'\n\n\nfunction SelectBox(props){\n    \n    const {labelTitle, labelDescription, defaultValue, containerStyle, placeholder, labelStyle, options, updateType, updateFormValue} = props\n\n    const [value, setValue] = useState(defaultValue || \"\")\n\n\n    const updateValue = (newValue) =>{\n        updateFormValue({updateType, value : newValue})\n        setValue(newValue)\n    }\n\n\n    return (\n        <div className={`inline-block ${containerStyle}`}>\n            <label  className={`label  ${labelStyle}`}>\n                <div className=\"label-text\">{labelTitle}\n                {labelDescription && <div className=\"tooltip tooltip-right\" data-tip={labelDescription}><InformationCircleIcon className='w-4 h-4'/></div>}\n                </div>\n            </label>\n\n            <select className=\"select select-bordered w-full\" value={value} onChange={(e) => updateValue(e.target.value)}>\n                <option disabled value=\"PLACEHOLDER\">{placeholder}</option>\n                {\n                    options.map((o, k) => {\n                        return <option value={o.value || o.name} key={k}>{o.name}</option>\n                    })\n                }\n            </select>\n        </div>\n    )\n}\n\nexport default SelectBox\n"
  },
  {
    "path": "webui/src/components/Input/TextAreaInput.js",
    "content": "import { useState } from \"react\"\n\n\nfunction TextAreaInput({labelTitle, labelStyle, type, containerStyle, defaultValue, placeholder, updateFormValue, updateType}){\n\n    const [value, setValue] = useState(defaultValue)\n\n    const updateInputValue = (val) => {\n        setValue(val)\n        updateFormValue({updateType, value : val})\n    }\n\n    return(\n        <div className={`form-control w-full ${containerStyle}`}>\n            <label className=\"label\">\n                <span className={\"label-text text-base-content \" + labelStyle}>{labelTitle}</span>\n            </label>\n            <textarea value={value} className=\"textarea textarea-bordered w-full\" placeholder={placeholder || \"\"} onChange={(e) => updateInputValue(e.target.value)}></textarea>\n        </div>\n    )\n}\n\n\nexport default TextAreaInput"
  },
  {
    "path": "webui/src/components/Input/ToogleInput.js",
    "content": "import { useState } from \"react\"\n\n\nfunction ToogleInput({labelTitle, labelStyle, type, containerStyle, defaultValue, placeholder, updateFormValue, updateType}){\n\n    const [value, setValue] = useState(defaultValue)\n\n    const updateToogleValue = () => {\n        setValue(!value)\n        updateFormValue({updateType, value : !value})\n    }\n\n    return(\n        <div className={`form-control w-full ${containerStyle}`}>\n            <label className=\"label cursor-pointer\">\n                <span className={\"label-text text-base-content \" + labelStyle}>{labelTitle}</span>\n                <input type=\"checkbox\" className=\"toggle\" checked={value}  onChange={(e) => updateToogleValue()}/>\n            </label>\n        </div>\n    )\n}\n\n\nexport default ToogleInput\n"
  },
  {
    "path": "webui/src/components/Typography/ErrorText.js",
    "content": "function ErrorText({styleClass, children}){\n    return(\n        <p className={`text-center  text-error ${styleClass}`}>{children}</p>\n    )\n}\n\nexport default ErrorText"
  },
  {
    "path": "webui/src/components/Typography/HelperText.js",
    "content": "function HelperText({className, children}){\n    return(\n        <div className={`text-slate-400 ${className}`}>{children}</div>\n    )\n}\n\nexport default HelperText"
  },
  {
    "path": "webui/src/components/Typography/Subtitle.js",
    "content": " function Subtitle({styleClass, children}){\n    return(\n        <div className={`text-xl font-semibold ${styleClass}`}>{children}</div>\n    )\n}\n\nexport default Subtitle"
  },
  {
    "path": "webui/src/components/Typography/Title.js",
    "content": "function Title({className, children}){\n    return(\n        <p className={`text-2xl font-bold  ${className}`}>{children}</p>\n    )\n}\n\nexport default Title"
  },
  {
    "path": "webui/src/containers/Header.js",
    "content": "import { themeChange } from 'theme-change'\nimport React, {  useEffect, useState } from 'react'\nimport { useSelector, useDispatch } from 'react-redux'\nimport BellIcon  from '@heroicons/react/24/outline/BellIcon'\nimport Bars3Icon  from '@heroicons/react/24/outline/Bars3Icon'\nimport MoonIcon from '@heroicons/react/24/outline/MoonIcon'\nimport SunIcon from '@heroicons/react/24/outline/SunIcon'\nimport { openRightDrawer } from '../features/common/rightDrawerSlice';\nimport { RIGHT_DRAWER_TYPES } from '../utils/globalConstantUtil'\n\nimport { NavLink,  Routes, Link , useLocation} from 'react-router-dom'\n\n\nfunction Header(){\n\n    const dispatch = useDispatch()\n    const {noOfNotifications, pageTitle} = useSelector(state => state.header)\n    const [currentTheme, setCurrentTheme] = useState(localStorage.getItem(\"theme\"))\n\n    useEffect(() => {\n        themeChange(false)\n        if(currentTheme === null){\n            if (window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches ) {\n                setCurrentTheme(\"dark\")\n            }else{\n                setCurrentTheme(\"light\")\n            }\n        }\n        // 👆 false parameter is required for react project\n      }, [])\n\n\n    // Opening right sidebar for notification\n    const openNotification = () => {\n        dispatch(openRightDrawer({header : \"Notifications\", bodyType : RIGHT_DRAWER_TYPES.NOTIFICATION}))\n    }\n\n\n    function logoutUser(){\n        localStorage.clear();\n        window.location.href = '/'\n    }\n\n    return(\n        // navbar fixed  flex-none justify-between bg-base-300  z-10 shadow-md\n        \n        <>\n            <div className=\"navbar sticky top-0 bg-base-100  z-10 shadow-md \">\n\n\n                {/* Menu toogle for mobile view or small screen */}\n                <div className=\"flex-1\">\n                    <label htmlFor=\"left-sidebar-drawer\" className=\"btn btn-primary drawer-button lg:hidden\">\n                    <Bars3Icon className=\"h-5 inline-block w-5\"/></label>\n                    <h1 className=\"text-2xl font-semibold ml-2\">{pageTitle}</h1>\n                </div>\n\n                \n\n            <div className=\"flex-none \">\n\n                {/* Multiple theme selection, uncomment this if you want to enable multiple themes selection, \n                also includes corporate and retro themes in tailwind.config file */}\n                \n                {/* <select className=\"select select-sm mr-4\" data-choose-theme>\n                    <option disabled selected>Theme</option>\n                    <option value=\"light\">Default</option>\n                    <option value=\"dark\">Dark</option>\n                    <option value=\"corporate\">Corporate</option>\n                    <option value=\"retro\">Retro</option>\n                </select> */}\n\n\n            {/* Light and dark theme selection toogle **/}\n            <label className=\"swap \">\n                <input type=\"checkbox\"/>\n                <SunIcon data-set-theme=\"light\" data-act-class=\"ACTIVECLASS\" className={\"fill-current w-6 h-6 \"+(currentTheme === \"dark\" ? \"swap-on\" : \"swap-off\")}/>\n                <MoonIcon data-set-theme=\"dark\" data-act-class=\"ACTIVECLASS\" className={\"fill-current w-6 h-6 \"+(currentTheme === \"light\" ? 
\"swap-on\" : \"swap-off\")} />\n            </label>\n\n\n                {/* Notification icon */}\n                {/* <button className=\"btn btn-ghost ml-4  btn-circle\" onClick={() => openNotification()}>\n                    <div className=\"indicator\">\n                        <BellIcon className=\"h-6 w-6\"/>\n                        {noOfNotifications > 0 ? <span className=\"indicator-item badge badge-secondary badge-sm\">{noOfNotifications}</span> : null }\n                    </div>\n                </button> */}\n\n\n                {/* Profile icon, opening menu on click */}\n                {/* <div className=\"dropdown dropdown-end ml-4\">\n                    <label tabIndex={0} className=\"btn btn-ghost btn-circle avatar\">\n                        <div className=\"w-10 rounded-full\">\n                        <img src=\"https://placeimg.com/80/80/people\" alt=\"profile\" />\n                        </div>\n                    </label>\n                    <ul tabIndex={0} className=\"menu menu-compact dropdown-content mt-3 p-2 shadow bg-base-100 rounded-box w-52\">\n                        <li className=\"justify-between\">\n                        <Link to={'/app/settings-profile'}>\n                            Profile Settings\n                            <span className=\"badge\">New</span>\n                            </Link>\n                        </li>\n                        <li className=''><Link to={'/app/settings-billing'}>Bill History</Link></li>\n                        <div className=\"divider mt-0 mb-0\"></div>\n                        <li><a onClick={logoutUser}>Logout</a></li>\n                    </ul>\n                </div> */}\n            </div>\n            </div>\n\n        </>\n    )\n}\n\nexport default Header"
  },
  {
    "path": "webui/src/containers/Layout.js",
    "content": "import PageContent from \"./PageContent\"\nimport LeftSidebar from \"./LeftSidebar\"\nimport { useSelector, useDispatch } from 'react-redux'\nimport RightSidebar from './RightSidebar'\nimport { useEffect } from \"react\"\nimport  {  removeNotificationMessage } from \"../features/common/headerSlice\"\nimport {NotificationContainer, NotificationManager} from 'react-notifications';\nimport 'react-notifications/lib/notifications.css';\nimport ModalLayout from \"./ModalLayout\"\n\nfunction Layout(){\n  const dispatch = useDispatch()\n  const {newNotificationMessage, newNotificationStatus} = useSelector(state => state.header)\n\n\n  useEffect(() => {\n      if(newNotificationMessage !== \"\"){\n          if(newNotificationStatus === 1)NotificationManager.success(newNotificationMessage, 'Success')\n          if(newNotificationStatus === 0)NotificationManager.error( newNotificationMessage, 'Error')\n          dispatch(removeNotificationMessage())\n      }\n  }, [newNotificationMessage])\n\n    return(\n      <>\n        { /* Left drawer - containing page content and side bar (always open) */ }\n        <div className=\"drawer  lg:drawer-open\">\n            <input id=\"left-sidebar-drawer\" type=\"checkbox\" className=\"drawer-toggle\" />\n            <PageContent/>\n            <LeftSidebar />\n        </div>\n\n        { /* Right drawer - containing secondary content like notifications list etc.. */ }\n        {/* <RightSidebar /> */}\n\n\n        {/** Notification layout container */}\n        {/* <NotificationContainer /> */}\n\n      {/* Modal layout container */}\n        {/* <ModalLayout /> */}\n\n      </>\n    )\n}\n\nexport default Layout"
  },
  {
    "path": "webui/src/containers/LeftSidebar.js",
    "content": "import routes from '../routes/sidebar'\nimport { NavLink,  Routes, Link , useLocation} from 'react-router-dom'\nimport SidebarSubmenu from './SidebarSubmenu';\nimport XMarkIcon  from '@heroicons/react/24/outline/XMarkIcon'\nimport { useDispatch } from 'react-redux';\n\nfunction LeftSidebar(){\n    const location = useLocation();\n\n    const dispatch = useDispatch()\n\n\n    const close = (e) => {\n        document.getElementById('left-sidebar-drawer').click()\n    }\n\n    return(\n        <div className=\"drawer-side  z-30  \">\n            <label htmlFor=\"left-sidebar-drawer\" className=\"drawer-overlay\"></label> \n            <ul className=\"menu  pt-2 w-80 bg-base-100 min-h-full   text-base-content\">\n            <button className=\"btn btn-ghost bg-base-300  btn-circle z-50 top-0 right-0 mt-4 mr-2 absolute lg:hidden\" onClick={() => close()}>\n            <XMarkIcon className=\"h-5 inline-block w-5\"/>\n            </button>\n\n                <li className=\"mb-2 font-semibold text-xl\">\n                    \n                    <Link to={'/app/welcome'}><img className=\"mask w-30\" src=\"/transopt.png\" alt=\"TrasnOpt Logo\"/></Link> </li>\n                {\n                    routes.map((route, k) => {\n                        return(\n                            <li className=\"\" key={k}>\n                                {\n                                    route.submenu ? \n                                        <SidebarSubmenu {...route}/> : \n                                    (<NavLink\n                                        end\n                                        to={route.path}\n                                        className={({isActive}) => `${isActive ? 'font-semibold  bg-base-200 ' : 'font-normal'}`} >\n                                           {route.icon} {route.name}\n                                            {\n                                                location.pathname === route.path ? (<span className=\"absolute inset-y-0 left-0 w-1 rounded-tr-md rounded-br-md bg-primary \"\n                                                aria-hidden=\"true\"></span>) : null\n                                            }\n                                    </NavLink>)\n                                }\n                                \n                            </li>\n                        )\n                    })\n                }\n\n            </ul>\n        </div>\n    )\n}\n\nexport default LeftSidebar"
  },
  {
    "path": "webui/src/containers/ModalLayout.js",
    "content": "import { useEffect } from 'react'\nimport { MODAL_BODY_TYPES } from '../utils/globalConstantUtil'\nimport { useSelector, useDispatch } from 'react-redux'\nimport { closeModal } from '../features/common/modalSlice'\nimport AddLeadModalBody from '../features/leads/components/AddLeadModalBody'\nimport ConfirmationModalBody from '../features/common/components/ConfirmationModalBody'\n\n\nfunction ModalLayout(){\n\n\n    const {isOpen, bodyType, size, extraObject, title} = useSelector(state => state.modal)\n    const dispatch = useDispatch()\n\n    const close = (e) => {\n        dispatch(closeModal(e))\n    }\n\n\n\n    return(\n        <>\n        {/* The button to open modal */}\n\n            {/* Put this part before </body> tag */}\n            <div className={`modal ${isOpen ? \"modal-open\" : \"\"}`}>\n            <div className={`modal-box  ${size === 'lg' ? 'max-w-5xl' : ''}`}>\n                <button className=\"btn btn-sm btn-circle absolute right-2 top-2\" onClick={() => close()}>✕</button>\n                <h3 className=\"font-semibold text-2xl pb-6 text-center\">{title}</h3>\n\n\n                {/* Loading modal body according to different modal type */}\n                {\n                    {\n                             [MODAL_BODY_TYPES.LEAD_ADD_NEW] : <AddLeadModalBody closeModal={close} extraObject={extraObject}/>,\n                             [MODAL_BODY_TYPES.CONFIRMATION] : <ConfirmationModalBody extraObject={extraObject} closeModal={close}/>,\n                             [MODAL_BODY_TYPES.DEFAULT] : <div></div>\n                    }[bodyType]\n                }\n            </div>\n            </div>\n            </>\n    )\n}\n\nexport default ModalLayout"
  },
  {
    "path": "webui/src/containers/PageContent.js",
    "content": "import Header from \"./Header\"\nimport { BrowserRouter as Router, Route, Routes } from 'react-router-dom'\nimport routes from '../routes'\nimport { Suspense, lazy } from 'react'\nimport SuspenseContent from \"./SuspenseContent\"\nimport { useSelector } from 'react-redux'\nimport { useEffect, useRef } from \"react\"\n\nconst Page404 = lazy(() => import('../pages/protected/404'))\n\n\nfunction PageContent(){\n    const mainContentRef = useRef(null);\n    const {pageTitle} = useSelector(state => state.header)\n\n\n    // Scroll back to top on new page load\n    useEffect(() => {\n        mainContentRef.current.scroll({\n            top: 0,\n            behavior: \"smooth\"\n          });\n      }, [pageTitle])\n\n    return(\n        <div className=\"drawer-content flex flex-col \">\n            <Header/>\n            <main className=\"flex-1 overflow-y-auto md:pt-4 pt-4 px-6  bg-base-200\" ref={mainContentRef}>\n                <Suspense fallback={<SuspenseContent />}>\n                        <Routes>\n                            {\n                                routes.map((route, key) => {\n                                    return(\n                                        <Route\n                                            key={key}\n                                            exact={true}\n                                            path={`${route.path}`}\n                                            element={<route.component />}\n                                        />\n                                    )\n                                })\n                            }\n\n                            {/* Redirecting unknown url to 404 page */}\n                            <Route path=\"*\" element={<Page404 />} />\n                        </Routes>\n                </Suspense>\n                <div className=\"h-16\"></div>\n            </main>\n        </div> \n    )\n}\n\n\nexport default PageContent\n"
  },
  {
    "path": "webui/src/containers/RightSidebar.js",
    "content": "import XMarkIcon  from '@heroicons/react/24/solid/XMarkIcon'\nimport { useDispatch, useSelector } from 'react-redux'\nimport NotificationBodyRightDrawer from '../features/common/components/NotificationBodyRightDrawer'\nimport { closeRightDrawer } from '../features/common/rightDrawerSlice'\nimport { RIGHT_DRAWER_TYPES } from '../utils/globalConstantUtil'\nimport CalendarEventsBodyRightDrawer from '../features/calendar/CalendarEventsBodyRightDrawer'\n\n\nfunction RightSidebar(){\n\n    const {isOpen, bodyType, extraObject, header} = useSelector(state => state.rightDrawer)\n    const dispatch = useDispatch()\n\n    const close = (e) => {\n      dispatch(closeRightDrawer(e))\n    }\n\n      \n\n    return(\n        <div className={\" fixed overflow-hidden z-20 bg-gray-900 bg-opacity-25 inset-0 transform ease-in-out \" + (isOpen ? \" transition-opacity opacity-100 duration-500 translate-x-0  \" : \" transition-all delay-500 opacity-0 translate-x-full  \")}>\n      \n            <section className={ \"w-80 md:w-96  right-0 absolute bg-base-100 h-full shadow-xl delay-400 duration-500 ease-in-out transition-all transform  \" + (isOpen ? \" translate-x-0 \" : \" translate-x-full \")}>\n                \n                    <div className=\"relative  pb-5 flex flex-col  h-full\">\n                        \n                        {/* Header */}\n                        <div className=\"navbar   flex pl-4 pr-4   shadow-md \">\n                            <button className=\"float-left btn btn-circle btn-outline btn-sm\" onClick={() => close()}>\n                            <XMarkIcon className=\"h-5 w-5\"/>\n                            </button>\n                            <span className=\"ml-2 font-bold text-xl\">{header}</span>\n                        </div>\n\n\n                        {/* ------------------ Content Start ------------------ */}\n                        <div className=\"overflow-y-scroll pl-4 pr-4\">\n                            <div className=\"flex flex-col w-full\">\n                            {/* Loading drawer body according to different drawer type */}\n                            {\n                                {\n                                        [RIGHT_DRAWER_TYPES.NOTIFICATION] : <NotificationBodyRightDrawer {...extraObject} closeRightDrawer={close}/>,\n                                        [RIGHT_DRAWER_TYPES.CALENDAR_EVENTS] : <CalendarEventsBodyRightDrawer {...extraObject} closeRightDrawer={close}/>,\n                                        [RIGHT_DRAWER_TYPES.DEFAULT] : <div></div>\n                                }[bodyType]\n                            }\n                                \n                            </div>\n                        </div>\n                        {/* ------------------ Content End ------------------ */}\n                    </div>\n\n            </section>\n\n            <section className=\" w-screen h-full cursor-pointer \" onClick={() => close()} ></section>\n        </div>\n    )\n}\n\nexport default RightSidebar"
  },
  {
    "path": "webui/src/containers/SidebarSubmenu.js",
    "content": "import ChevronDownIcon from  '@heroicons/react/24/outline/ChevronDownIcon'\nimport {useEffect, useState} from 'react'\nimport { Link, useLocation } from 'react-router-dom'\n\n\nfunction SidebarSubmenu({submenu, name, icon}){\n    const location = useLocation()\n    const [isExpanded, setIsExpanded] = useState(false)\n\n\n    /** Open Submenu list if path found in routes, this is for directly loading submenu routes  first time */\n    useEffect(() => {\n        if(submenu.filter(m => {return m.path === location.pathname})[0])setIsExpanded(true)\n    }, [])\n\n    return (\n        <div className='flex flex-col'>\n\n            {/** Route header */}\n            <div className='w-full block' onClick={() => setIsExpanded(!isExpanded)}>\n                {icon} {name} \n                <ChevronDownIcon className={'w-5 h-5 mt-1 float-right delay-400 duration-500 transition-all  ' + (isExpanded ? 'rotate-180' : '')}/>\n            </div>\n\n            {/** Submenu list */}\n            <div className={` w-full `+ (isExpanded ? \"\" : \"hidden\")}>\n                <ul className={`menu menu-compact`}>\n                {\n                    submenu.map((m, k) => {\n                        return(\n                            <li key={k}>\n                                <Link to={m.path}>\n                                    {m.icon} {m.name}\n                                    {\n                                            location.pathname == m.path ? (<span className=\"absolute mt-1 mb-1 inset-y-0 left-0 w-1 rounded-tr-md rounded-br-md bg-primary \"\n                                                aria-hidden=\"true\"></span>) : null\n                                    }\n                                </Link>\n                            </li>\n                        )\n                    })\n                }\n                </ul>\n            </div>\n        </div>\n    )\n}\n\nexport default SidebarSubmenu"
  },
  {
    "path": "webui/src/containers/SuspenseContent.js",
    "content": "function SuspenseContent(){\n    return(\n        <div className=\"w-full h-screen text-gray-300 dark:text-gray-200 bg-base-100\">\n            Loading...\n        </div>\n    )\n}\n\nexport default SuspenseContent"
  },
  {
    "path": "webui/src/features/algorithm/components/OptTable.js",
    "content": "import React from \"react\";\nimport {\n  Table,\n} from \"reactstrap\";\n// import { Table } from 'react-bootstrap';\n\n// function OptTable({ optimizer }) {\n//     return (\n//         <Table lg={12} md={12} sm={12} striped>\n//             <thead>\n//                 <tr className=\"fs-sm\">\n//                     <th>#</th>\n//                     <th>Narrow Search Space</th>\n//                     <th>Initialization</th>\n//                     <th>Pre-train</th>\n//                     <th>Surrogate Model</th>\n//                     <th>Acquisition Function</th>\n//                     <th>Normalizer</th>\n//                 </tr>\n//             </thead>\n//             <tbody>\n//                 <tr key=\"Name\">\n//                     <td>Name</td>\n//                     <td>{optimizer.SpaceRefiner}</td>\n//                     <td>{optimizer.Sampler}</td>\n//                     <td>{optimizer.Pretrain}</td>\n//                     <td>{optimizer.Model}</td>\n//                     <td>{optimizer.ACF}</td>\n//                     <td>{optimizer.Normalizer}</td>\n//                 </tr>\n//                 <tr key=\"Parameters\">\n//                     <td>Parameters</td>\n//                     <td>{optimizer.SpaceRefinerParameters}</td>\n//                     <td>InitNum:{optimizer.SamplerInitNum},{optimizer.SamplerParameters}</td>\n//                     <td>{optimizer.PretrainParameters}</td>\n//                     <td>{optimizer.ModelParameters}</td>\n//                     <td>{optimizer.ACFParameters}</td>\n//                     <td>{optimizer.NormalizerParameters}</td>\n//                 </tr>\n//             </tbody>\n//         </Table>\n//     );\n// }\n\n\n\nfunction OptTable({ optimizer }) {\n    return (\n        <Table lg={12} md={12} sm={12} striped>\n            <thead>\n                <tr className=\"fs-sm\">\n                    <th>#</th>\n                    <th>Name</th>\n                    <th>Parameters</th>\n                </tr>\n            </thead>\n            <tbody>\n                <tr key=\"Name\">\n                    <td>Prune Search Space</td>\n                    <td>{optimizer.SpaceRefiner}</td>\n                    <td>{optimizer.SpaceRefinerParameters}</td>\n                </tr>\n\n                <tr key=\"Name\">\n                    <td>Initialization</td>\n                    <td>{optimizer.Sampler}</td>\n                    <td>The number of Initialization:{optimizer.SamplerInitNum},{optimizer.SamplerParameters}</td>\n                </tr>\n\n                <tr key=\"Name\">\n                    <td>Pre-train</td>\n                    <td>{optimizer.Pretrain}</td>\n                    <td>{optimizer.PretrainParameters}</td>\n\n                </tr>\n                <tr key=\"Name\">\n                    <td>Surrogate Model</td>\n                    <td>{optimizer.Model}</td>\n                    <td>{optimizer.ModelParameters}</td>\n\n                </tr>\n                <tr key=\"Name\">\n                    <td>Acquisition Function</td>\n                    <td>{optimizer.ACF}</td>\n                    <td>{optimizer.ACFParameters}</td>\n                </tr>\n                \n                <tr key=\"Name\">\n                    <td>Normalizer</td>\n                    <td>{optimizer.Normalizer}</td>\n                    <td>{optimizer.NormalizerParameters}</td>\n\n                </tr>\n\n            </tbody>\n        </Table>\n    );\n}\n\n\nexport default OptTable;\n\n"
  },
  {
    "path": "webui/src/features/algorithm/components/SelectPlugin.js",
    "content": "import React, { useState } from \"react\";\n\nimport {\n    Button,\n    Form,\n    Input,\n    Select,\n    ConfigProvider,\n    Modal,\n} from \"antd\";\n\nfunction SelectAlgorithm({SpaceRefiner, Sampler, Pretrain, Model, ACF, DataSelector, Normalizer, updateTable}) {\n    const [form] = Form.useForm()\n\n    const onFinish = (values) => {\n        // 构造要发送到后端的数据\n        const messageToSend = values;\n        updateTable(messageToSend)\n        console.log('Request data:', messageToSend);\n        // 向后端发送请求...\n        fetch('http://localhost:5001/api/configuration/select_algorithm', {\n            method: 'POST',\n            headers: {\n              'Content-Type': 'application/json',\n            },\n            body: JSON.stringify(messageToSend),\n          })\n          .then(response => {\n            if (!response.ok) {\n              throw new Error('Network response was not ok');\n            } \n            return response.json();\n          })\n          .then(succeed => {\n            console.log('Message from back-end:', succeed);\n            Modal.success({\n              title: 'Information',\n              content: 'Submit successfully!'\n            })\n          })\n          .catch((error) => {\n            console.error('Error sending message:', error);\n            var errorMessage = error.error;\n            Modal.error({\n              title: 'Information',\n              content: 'error:' + errorMessage\n            })\n          });\n      };\n\n    return (\n        <ConfigProvider\n          theme={{\n            components: {\n              Input: {\n                addonBg:\"black\"\n              },\n            },\n          }}        \n        >\n        <Form\n            form={form}\n            name=\"Algorithm\"\n            onFinish={onFinish}\n            style={{width:\"100%\"}}\n            autoComplete=\"off\"\n            initialValues={{\n              SpaceRefiner: SpaceRefiner[0].name,\n              SpaceRefinerParameters: '',\n              SpaceRefinerDataSelector: 'None',\n              SpaceRefinerDataSelectorParameters: '',\n              Sampler: Sampler[0].name,\n              SamplerParameters: '',\n              SamplerInitNum: '11',\n              SamplerDataSelector: 'None',\n              SamplerDataSelectorParameters: '',\n              Pretrain: Pretrain[0].name,\n              PretrainParameters: '',\n              PretrainDataSelector: 'None',\n              PretrainDataSelectorParameters: '',\n              Model: Model[0].name,\n              ModelParameters: '',\n              ModelDataSelector: 'None',\n              ModelDataSelectorParameters: '',\n              ACF: ACF[0].name,\n              ACFParameters: '',\n              ACFDataSelector: 'None',\n              ACFDataSelectorParameters: '',\n              Normalizer: Normalizer[0].name,\n              NormalizerParameters: '',\n              NormalizerDataSelector: 'None',\n              NormalizerDataSelectorParameters: '',\n            }}\n        >\n          <div>\n            <div>\n                <h5 style={{color:\"#111\"}}>\n                  <span className=\"fw-semi-bold\">Search Space Prune</span>\n                </h5>\n            </div>\n            <div style={{ display: 'flex', alignItems: 'baseline' }}>\n              <Form.Item\n                name={'SpaceRefiner'}\n                style={{ marginRight: 8 , width: 300}}\n              >\n                <Select \n                  placeholder=\"name\"\n                  
defaultValue={SpaceRefiner[0].name}\n                  options={SpaceRefiner.map(item => ({ value: item.name }))}\n                />\n              </Form.Item>\n              <Form.Item\n                name={'SpaceRefinerParameters'}\n                style={{ flex: 1 , marginRight: 8}}\n              >\n                <Input placeholder=\"Parameters\"/>\n              </Form.Item>\n              {/* <h7 style={{color:\"white\", marginRight:8}}>DataSelector: </h7> */}\n              <Form.Item\n                name={'SpaceRefinerDataSelector'}\n              >\n                \n              </Form.Item>\n              <Form.Item\n                name={'SpaceRefinerDataSelectorParameters'}\n              >\n                \n              </Form.Item>\n            </div>\n\n            <div>\n                <h5 style={{color:\"#111\"}}>\n                  <span className=\"fw-semi-bold\">Initialization</span>\n                </h5>\n            </div>\n            <div style={{ display: 'flex', alignItems: 'baseline' }}>\n              <Form.Item\n                name={'Sampler'}\n                style={{ marginRight: 8 , width: 300}}\n              >\n                <Select\n                  placeholder=\"name\"\n                  defaultValue={Sampler[0].name}\n                  options={Sampler.map(item => ({ value: item.name }))}\n                />\n              </Form.Item>\n              <Form.Item\n                name={'SamplerParameters'}\n                style={{ flex: 1 }}\n              >\n                <Input placeholder=\"Parameters\"/>\n              </Form.Item>\n              <Form.Item\n                name={'SamplerInitNum'}\n                style={{ flex: 1, marginLeft: 8, marginRight: 8}}\n              >\n                <Input placeholder=\"Initial Sample Size\"/>\n              </Form.Item>\n              {/* <h7 style={{color:\"white\", marginRight:8}}>DataSelector: </h7> */}\n              <Form.Item\n                name={'SamplerDataSelector'}\n              >\n              </Form.Item>\n              <Form.Item\n                name={'SamplerDataSelectorParameters'}\n              >\n              </Form.Item>\n            </div>\n\n            <div>\n                <h5 style={{color:\"#111\"}}>\n                  <span className=\"fw-semi-bold\">Pre-train</span>\n                </h5>\n            </div>\n            <div style={{ display: 'flex', alignItems: 'baseline' }}>\n              <Form.Item\n                name={'Pretrain'}\n                style={{ marginRight: 8 , width: 300}}\n              >\n                <Select\n                  placeholder=\"name\"\n                  defaultValue={Pretrain[0].name}\n                  options={Pretrain.map(item => ({ value: item.name }))}\n                />\n              </Form.Item>\n              <Form.Item\n                name={'PretrainParameters'}\n                style={{ flex: 1 , marginRight: 8 }}\n              >\n                <Input placeholder=\"Parameters\"/>\n              </Form.Item>\n              {/* <h7 style={{color:\"white\", marginRight:8}}>DataSelector: </h7> */}\n              <Form.Item\n                name={'PretrainDataSelector'}\n              >\n              </Form.Item>\n              <Form.Item\n                name={'PretrainDataSelectorParameters'}\n              >\n              </Form.Item>\n            </div>\n\n            <div>\n                <h5 style={{color:\"#111\"}}>\n                  <span className=\"fw-semi-bold\">Surrogate Model</span>\n  
              </h5>\n            </div>\n            <div style={{ display: 'flex', alignItems: 'baseline' }}>\n              <Form.Item\n                name={'Model'}\n                style={{ marginRight: 8 , width: 300}}\n              >\n                <Select\n                  placeholder=\"name\"\n                  defaultValue={Model[0].name}\n                  options={Model.map(item => ({ value: item.name }))}\n                />\n              </Form.Item>\n              <Form.Item\n                name={'ModelParameters'}\n                style={{ flex: 1, marginRight: 8}}\n              >\n                <Input placeholder=\"Parameters\" />\n              </Form.Item>\n              {/* <h7 style={{color:\"white\", marginRight:8}}>DataSelector: </h7> */}\n              <Form.Item\n                name={'ModelDataSelector'}\n              >\n              </Form.Item>\n              <Form.Item\n                name={'ModelDataSelectorParameters'}\n              >\n              </Form.Item>\n            </div>\n\n            <div>\n                <h5 style={{color:\"#111\"}}>\n                  <span className=\"fw-semi-bold\">Acquisition Function</span>\n                </h5>\n            </div>\n            <div style={{ display: 'flex', alignItems: 'baseline' }}>\n              <Form.Item\n                name={'ACF'}\n                style={{ marginRight: 8 , width: 300}}\n              >\n                <Select\n                  placeholder=\"name\"\n                  defaultValue={ACF[0].name}\n                  options={ACF.map(item => ({ value: item.name }))}\n                />\n              </Form.Item>\n              <Form.Item\n                name={'ACFParameters'}\n                style={{ flex: 1, marginRight: 8}}\n              >\n                <Input placeholder=\"Parameters\" />\n              </Form.Item>\n              {/* <h7 style={{color:\"white\", marginRight:8}}>DataSelector: </h7> */}\n              <Form.Item\n                name={'ACFDataSelector'}\n              >\n              </Form.Item>\n              <Form.Item\n                name={'ACFDataSelectorParameters'}\n              >\n              </Form.Item>\n            </div>\n\n            <div>\n                <h5 style={{color:\"#111\"}}>\n                  <span className=\"fw-semi-bold\">Normalizer</span>\n                </h5>\n            </div>\n            <div style={{ display: 'flex', alignItems: 'baseline' }}>\n              <Form.Item\n                name={'Normalizer'}\n                style={{ marginRight: 8 , width: 300}}\n              >\n                <Select\n                  placeholder=\"name\"\n                  defaultValue={Normalizer[0].name}\n                  options={Normalizer.map(item => ({ value: item.name }))}\n                />\n              </Form.Item>\n              <Form.Item\n                name={'NormalizerParameters'}\n                style={{ flex: 1, marginRight: 8}}\n              >\n                <Input placeholder=\"Parameters\"/>\n              </Form.Item>\n              {/* <h7 style={{color:\"white\", marginRight:8}}>DataSelector: </h7> */}\n              <Form.Item\n                name={'NormalizerDataSelector'}\n              >\n              </Form.Item>\n              <Form.Item\n                name={'NormalizerDataSelectorParameters'}\n              >\n              </Form.Item>\n            </div>\n          </div>\n\n          <Form.Item style={{marginTop:10}}>\n            <Button type=\"primary\" 
htmlType=\"submit\" style={{width:\"120px\"}}>\n              Submit\n            </Button>\n          </Form.Item>\n        </Form>\n        </ConfigProvider>\n    )\n}\n\nexport default SelectAlgorithm;"
  },
  {
    "path": "webui/src/features/algorithm/index.js",
    "content": "import React from \"react\";\n\nimport TitleCard from \"../../components/Cards/TitleCard\"\n\nimport SelectPlugins from \"./components/SelectPlugin\";\nimport OptTable from \"./components/OptTable\";\n\n\nclass Algorithm extends React.Component {\n  constructor(props) {\n    super(props);\n    this.state = {\n      TasksData: [],\n      SpaceRefiner: [],\n      Sampler: [],\n      Pretrain: [],\n      Model: [],\n      ACF: [],\n      DataSelector: [],\n      Normalizer: [],\n      optimizer: {},\n    };\n  }\n\n  updateTable = (newOptimizer) => {\n    this.setState({ optimizer: newOptimizer });\n  }\n\n  render() {\n    if (this.state.TasksData.length === 0) {\n      const messageToSend = {\n        action: 'ask for basic information',\n      }\n      fetch('http://localhost:5001/api/configuration/basic_information', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        } \n        return response.json();\n      })\n      .then(data => {\n        console.log('Message from back-end:', data);\n        this.setState({ TasksData: data.TasksData,\n                        SpaceRefiner: data.SpaceRefiner,\n                        Sampler: data.Sampler,\n                        Pretrain: data.Pretrain,\n                        Model: data.Model,\n                        ACF: data.ACF,\n                        DataSelector: data.DataSelector,\n                        Normalizer: data.Normalizer,\n                      });\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n      });\n\n      fetch('http://localhost:5001/api/RunPage/get_info', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        } \n        return response.json();\n      })\n      .then(data => {\n        console.log('Configuration infomation from back-end:', data);\n        this.setState({ optimizer: data.optimizer });\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n      });\n    } else {\n      return (\n        <div>\n          <div className=\"grid mt-4 grid-cols-1 lg:grid-cols-[50%_50%] gap-6\">\n                <TitleCard>\n                  <SelectPlugins SpaceRefiner={this.state.SpaceRefiner}\n                                    Sampler={this.state.Sampler}\n                                    Pretrain={this.state.Pretrain}\n                                    Model={this.state.Model}\n                                    ACF={this.state.ACF}\n                                    DataSelector={this.state.DataSelector}\n                                    Normalizer={this.state.Normalizer}\n                                    updateTable={this.updateTable}\n                  />\n                </TitleCard>\n\n                <TitleCard\n                  title={\n                    <h5>\n                      <span className=\"fw-semi-bold\">Composition</span>\n                    </h5>\n                  }\n                  collapse\n                >\n                  <OptTable optimizer={this.state.optimizer} />\n                </TitleCard>\n            
</div>\n        </div>\n      );\n    }\n  }\n}\n\nexport default Algorithm;"
  },
  {
    "path": "webui/src/features/analytics/charts/Box.js",
    "content": "import React from 'react';\nimport * as echarts from 'echarts';\nimport ReactECharts from 'echarts-for-react';\nimport BoxData from './data/BoxData.json';\nimport my_theme from './my_theme.json';\n\necharts.registerTheme('my_theme', my_theme.theme);\n\nfunction Box({ BoxData }) {\n  // Extract labels and data\n  const dataLabel = Object.keys(BoxData);\n  const data = Object.values(BoxData);\n\n  // Configure the ECharts option\n  const option = {\n    dataset: [\n      {\n        source: data,\n      },\n      {\n        transform: {\n          type: 'boxplot',\n          config: {\n            itemNameFormatter: function (value) {\n              return dataLabel[value.value];\n            },\n          },\n        },\n      },\n      {\n        fromDatasetIndex: 1,\n        fromTransformResult: 1,\n      },\n    ],\n    tooltip: {\n      trigger: 'item',\n      axisPointer: {\n        type: 'shadow',\n      },\n    },\n    toolbox: {\n      feature: {\n        saveAsImage: {},\n      },\n    },\n    grid: {\n      left: '10%',\n      right: '10%',\n      bottom: '15%',\n    },\n    xAxis: {\n      type: 'category',\n      // boundaryGap: true,\n      nameGap: 30,\n      axisLabel: {\n        color: '#ffffff',\n      },\n      lineStyle: {\n        color: 'black',\n      },\n    },\n    yAxis: {\n      type: 'value',\n      name: 'value',\n      lineStyle: {\n        color: 'black',\n      },\n      axisLabel: {\n        color: '#ffffff',\n      },\n      min: 'dataMin', // Set min to auto-scale\n      max: 'dataMax', // Set max to auto-scale\n    },\n    series: [\n      {\n        name: 'boxplot',\n        type: 'boxplot',\n        datasetIndex: 1,\n        itemStyle: {\n          color: '#2EC7C9',\n        },\n      },\n      {\n        name: 'outlier',\n        type: 'scatter',\n        datasetIndex: 2,\n        symbol: 'circle',\n      },\n    ],\n  };\n\n  return (\n    <ReactECharts\n      option={option}\n      style={{ height: 500 }}\n      theme=\"my_theme\"\n    />\n  );\n}\n\nexport default Box;\n"
  },
  {
    "path": "webui/src/features/analytics/charts/Trajectory.js",
    "content": "import React, { Component } from 'react';\nimport {\n  Chart,\n  Area,\n  Line,\n  Tooltip,\n  View,\n} from 'bizcharts';\n\nconst scale = {\n  y: { \n    sync: true,\n    nice: true,\n  },\n  FEs: {\n    type: 'linear',\n    nice: true,\n  },\n};\n\nconst color = [\n  \"#2ec7c9\",\n  \"#b6a2de\",\n  \"#5ab1ef\",\n  \"#ffb980\",\n  \"#d87a80\",\n  \"#8d98b3\",\n  \"#e5cf0d\",\n  \"#97b552\",\n  \"#95706d\",\n  \"#dc69aa\",\n  \"#07a2a4\",\n  \"#9a7fd1\",\n  \"#588dd5\",\n  \"#f5994e\",\n  \"#c05050\",\n  \"#59678c\",\n  \"#c9ab00\",\n  \"#7eb00a\",\n  \"#6f5553\",\n  \"#c14089\"\n]\n\nclass Trajectory extends Component {\n  constructor(props) {\n    super(props);\n  }\n  \n  render() {\n    const { TrajectoryData } = this.props\n\n    return (\n      <Chart id=\"chart\" scale={scale} height={400} autoFit>\n        <Tooltip shared />\n        {TrajectoryData.map((item, index) => (\n          <div>\n            <View key={index} data={item.average} scale={{ y: { alias: `${item.name}` } }}>\n              <Line position=\"FEs*y\" color={color[index]} />\n            </View>\n            <View key={index} data={item.uncertainty} scale={{ y: { alias: `${item.name}-uncertainty` } }}>\n              <Area position=\"FEs*y\" color={color[index]} shape=\"smooth\" />\n            </View>\n          </div>\n        ))}\n      </Chart>\n    );\n  }\n}\n\nexport default Trajectory;"
  },
  {
    "path": "webui/src/features/analytics/charts/my_theme.json",
    "content": "{\n    \"version\": 1,\n    \"themeName\": \"macarons\",\n    \"theme\": {\n        \"seriesCnt\": \"4\",\n        \"titleColor\": \"#ffffff\",\n        \"subtitleColor\": \"rgba(170,170,170,0.92)\",\n        \"textColorShow\": false,\n        \"textColor\": \"#333\",\n        \"markTextColor\": \"#ffffff\",\n        \"color\": [\n            \"#2ec7c9\",\n            \"#b6a2de\",\n            \"#5ab1ef\",\n            \"#ffb980\",\n            \"#d87a80\",\n            \"#8d98b3\",\n            \"#e5cf0d\",\n            \"#97b552\",\n            \"#95706d\",\n            \"#dc69aa\",\n            \"#07a2a4\",\n            \"#9a7fd1\",\n            \"#588dd5\",\n            \"#f5994e\",\n            \"#c05050\",\n            \"#59678c\",\n            \"#c9ab00\",\n            \"#7eb00a\",\n            \"#6f5553\",\n            \"#c14089\"\n        ],\n        \"borderColor\": \"#ffffff\",\n        \"borderWidth\": \"0\",\n        \"visualMapColor\": [\n            \"#5ab1ef\",\n            \"#e0ffff\"\n        ],\n        \"legendTextColor\": \"#ffffff\",\n        \"kColor\": \"#d87a80\",\n        \"kColor0\": \"#2ec7c9\",\n        \"kBorderColor\": \"#d87a80\",\n        \"kBorderColor0\": \"#2ec7c9\",\n        \"kBorderWidth\": 1,\n        \"lineWidth\": 2,\n        \"symbolSize\": 3,\n        \"symbol\": \"emptyCircle\",\n        \"symbolBorderWidth\": 1,\n        \"lineSmooth\": true,\n        \"graphLineWidth\": 1,\n        \"graphLineColor\": \"#aaaaaa\",\n        \"mapLabelColor\": \"#d87a80\",\n        \"mapLabelColorE\": \"rgb(100,0,0)\",\n        \"mapBorderColor\": \"#eeeeee\",\n        \"mapBorderColorE\": \"#444\",\n        \"mapBorderWidth\": 0.5,\n        \"mapBorderWidthE\": 1,\n        \"mapAreaColor\": \"#dddddd\",\n        \"mapAreaColorE\": \"rgba(254,153,78,1)\",\n        \"axes\": [\n            {\n                \"type\": \"all\",\n                \"name\": \"通用坐标轴\",\n                \"axisLineShow\": true,\n                \"axisLineColor\": \"#eeeeee\",\n                \"axisTickShow\": true,\n                \"axisTickColor\": \"#eeeeee\",\n                \"axisLabelShow\": true,\n                \"axisLabelColor\": \"#eeeeee\",\n                \"splitLineShow\": true,\n                \"splitLineColor\": [\n                    \"#aaaaaa\"\n                ],\n                \"splitAreaShow\": false,\n                \"splitAreaColor\": [\n                    \"#eeeeee\"\n                ]\n            },\n            {\n                \"type\": \"category\",\n                \"name\": \"类目坐标轴\",\n                \"axisLineShow\": true,\n                \"axisLineColor\": \"#ffffff\",\n                \"axisTickShow\": true,\n                \"axisTickColor\": \"#ffffff\",\n                \"axisLabelShow\": true,\n                \"axisLabelColor\": \"#ffffff\",\n                \"splitLineShow\": false,\n                \"splitLineColor\": [\n                    \"#ffffff\"\n                ],\n                \"splitAreaShow\": false,\n                \"splitAreaColor\": [\n                    \"rgba(250,250,250,0.3)\",\n                    \"rgba(200,200,200,0.3)\"\n                ]\n            },\n            {\n                \"type\": \"value\",\n                \"name\": \"数值坐标轴\",\n                \"axisLineShow\": true,\n                \"axisLineColor\": \"#ffffff\",\n                \"axisTickShow\": true,\n                \"axisTickColor\": \"#ffffff\",\n                \"axisLabelShow\": true,\n                
\"axisLabelColor\": \"#ffffff\",\n                \"splitLineShow\": true,\n                \"splitLineColor\": [\n                    \"#ffffff\"\n                ],\n                \"splitAreaShow\": false,\n                \"splitAreaColor\": [\n                    \"rgba(250,250,250,0.3)\",\n                    \"rgba(200,200,200,0.3)\"\n                ]\n            },\n            {\n                \"type\": \"log\",\n                \"name\": \"对数坐标轴\",\n                \"axisLineShow\": true,\n                \"axisLineColor\": \"#ffffff\",\n                \"axisTickShow\": true,\n                \"axisTickColor\": \"#ffffff\",\n                \"axisLabelShow\": true,\n                \"axisLabelColor\": \"#ffffff\",\n                \"splitLineShow\": true,\n                \"splitLineColor\": [\n                    \"#ffffff\"\n                ],\n                \"splitAreaShow\": false,\n                \"splitAreaColor\": [\n                    \"rgba(250,250,250,0.3)\",\n                    \"rgba(200,200,200,0.3)\"\n                ]\n            },\n            {\n                \"type\": \"time\",\n                \"name\": \"时间坐标轴\",\n                \"axisLineShow\": true,\n                \"axisLineColor\": \"#ffffff\",\n                \"axisTickShow\": true,\n                \"axisTickColor\": \"#ffffff\",\n                \"axisLabelShow\": true,\n                \"axisLabelColor\": \"#ffffff\",\n                \"splitLineShow\": true,\n                \"splitLineColor\": [\n                    \"#ffffff\"\n                ],\n                \"splitAreaShow\": false,\n                \"splitAreaColor\": [\n                    \"rgba(250,250,250,0.3)\",\n                    \"rgba(200,200,200,0.3)\"\n                ]\n            }\n        ],\n        \"axisSeperateSetting\": true,\n        \"toolboxColor\": \"#ffffff\",\n        \"toolboxEmphasisColor\": \"#18a4a6\",\n        \"tooltipAxisColor\": \"#18a4a6\",\n        \"tooltipAxisWidth\": \"2\",\n        \"timelineLineColor\": \"#008acd\",\n        \"timelineLineWidth\": 1,\n        \"timelineItemColor\": \"#ffffff\",\n        \"timelineItemColorE\": \"#006cdd\",\n        \"timelineCheckColor\": \"#2ec7c9\",\n        \"timelineCheckBorderColor\": \"#2ec7c9\",\n        \"timelineItemBorderWidth\": 1,\n        \"timelineControlColor\": \"#008acd\",\n        \"timelineControlBorderColor\": \"#008acd\",\n        \"timelineControlBorderWidth\": 0.5,\n        \"timelineLabelColor\": \"#008acd\",\n        \"datazoomBackgroundColor\": \"rgba(47,69,84,0)\",\n        \"datazoomDataColor\": \"#efefff\",\n        \"datazoomFillColor\": \"rgba(182,162,222,0.2)\",\n        \"datazoomHandleColor\": \"#008acd\",\n        \"datazoomHandleWidth\": \"100\",\n        \"datazoomLabelColor\": \"#333333\"\n    }\n}"
  },
  {
    "path": "webui/src/features/analytics/components/LineChart.js",
    "content": "import React from 'react';\nimport { Line } from 'react-chartjs-2';\nimport TitleCard from '../../../components/Cards/TitleCard';\n\nimport {\n  Chart as ChartJS,\n  LineElement,\n  PointElement,\n  LinearScale,\n  Title,\n  CategoryScale,\n  Tooltip,\n  Legend,\n  Filler,\n} from 'chart.js';\n\n// 注册图表组件\nChartJS.register(LineElement, PointElement, LinearScale, CategoryScale, Title, Tooltip, Legend, Filler);\n\nconst color = [\n  \"#2ec7c9\", \"#b6a2de\", \"#5ab1ef\", \"#ffb980\", \"#d87a80\",\n  \"#8d98b3\", \"#e5cf0d\", \"#97b552\", \"#95706d\", \"#dc69aa\",\n  \"#07a2a4\", \"#9a7fd1\", \"#588dd5\", \"#f5994e\", \"#c05050\",\n  \"#59678c\", \"#c9ab00\", \"#7eb00a\", \"#6f5553\", \"#c14089\"\n];\n\nconst Trajectory = ({ TrajectoryData }) => {\n  // 默认值处理，防止 undefined 错误\n  const data = {\n    datasets: TrajectoryData ? TrajectoryData.flatMap((item, index) => [\n      // 线图数据集\n      {\n        label: `${item.name}`, // 数据集名称\n        data: item.average.map(point => ({ x: point.FEs, y: point.y })), // 将数据点映射为 {x, y}\n        borderColor: color[index % color.length], // 使用颜色数组中的颜色\n        backgroundColor: color[index % color.length], // 线条颜色\n        tension: 0.4, // 曲线平滑度\n        fill: false, // 不填充区域\n      },\n      // 不确定性区域数据集\n      {\n        label: `${item.name} - uncertainty`, // 数据集名称\n        data: item.uncertainty.map(point => ({ x: point.FEs, y: point.y })), // 不确定性区域数据\n        borderColor: color[index % color.length], // 边框颜色\n        backgroundColor: `${color[index % color.length]}33`, // 透明背景色表示不确定性\n        tension: 0.4, // 曲线平滑度\n        fill: true, // 填充区域\n      },\n    ]) : [],\n  };\n\n  const options = {\n    scales: {\n      x: {\n        type: 'linear', // X轴类型为线性\n        title: {\n          display: true,\n          text: 'FEs', // X轴标题\n        },\n      },\n      y: {\n        title: {\n          display: true,\n          text: 'Y', // Y轴标题\n        },\n        beginAtZero: true,\n        nice: true,\n        sync: true,\n      },\n    },\n    plugins: {\n      tooltip: {\n        mode: 'index', // 工具提示框\n        intersect: false,\n        shared: true,\n      },\n      legend: {\n        display: true,\n        position: 'top',\n        labels: {\n          filter: (legendItem) => !legendItem.text.includes('uncertainty'), // 过滤掉包含 \"uncertainty\" 的标签\n        },\n      },\n    },\n  };\n\n  return (\n    <TitleCard title={\"Convergence Trajectory\"}>\n      <Line data={data} options={options}/>\n    </TitleCard>\n    // <div style={{ width: '100%', height: '400px' }}>\n    //   <Line data={data} options={options} />\n    // </div>\n  );\n};\n\nexport default Trajectory;\n"
  },
  {
    "path": "webui/src/features/analytics/components/SelectTask.js",
    "content": "import React, { useState } from \"react\";\n\nimport { MinusCircleOutlined, PlusOutlined } from '@ant-design/icons';\nimport {\n    Button,\n    Form,\n    Input,\n    Space, \n    Select,\n    Modal,\n} from \"antd\";\n\n\nfunction ASearch({key, name, restField, remove, selections}) {\n\n    return (\n        <Space key={key} className=\"space\" style={{ marginBottom: 1 }} align=\"baseline\">\n           <Form.Item\n             {...restField}\n             name={[name, 'TaskName']}\n           >\n             <Input placeholder=\"TaskName\" style={{ minWidth: 93 }} />\n           </Form.Item>\n\n           <Form.Item\n             {...restField}\n             name={[name, 'NumObjs']}\n           >\n             <Input placeholder=\"NumObjs\" style={{ minWidth: 83 }} />\n           </Form.Item>\n\n           <Form.Item\n             {...restField}\n             name={[name, 'NumVars']}\n           >\n             <Input placeholder=\"NumVars\" style={{ minWidth: 83 }} />\n           </Form.Item>\n\n           <Form.Item\n             {...restField}\n             name={[name, 'Fidelity']}\n           >\n             <Input placeholder=\"Fidelity\" style={{ minWidth: 70 }} />\n           </Form.Item>\n\n           <Form.Item\n             {...restField}\n             name={[name, 'Workload']}\n           >\n             <Input placeholder=\"Workload\" style={{ minWidth: 85 }} />\n           </Form.Item>\n\n           <Form.Item\n             {...restField}\n             name={[name, 'Seed']}\n           >\n             <Input placeholder=\"Seed\" style={{ minWidth: 60 }} />\n           </Form.Item>\n           \n           <Form.Item\n             {...restField}\n             name={[name, 'Refiner']}\n           >\n             <Select\n               placeholder=\"Refiner\"\n               options={selections.Refiner.map(item => ({value: item}))}\n             />\n           </Form.Item>\n\n           <Form.Item\n             {...restField}\n             name={[name, 'Sampler']}\n           >\n             <Select\n               placeholder=\"Sampler\"\n               options={selections.Sampler.map(item => ({value: item}))}\n             />\n           </Form.Item>\n\n           <Form.Item\n             {...restField}\n             name={[name, 'Pretrain']}\n           >\n             <Select\n               placeholder=\"Pretrain\"\n               options={selections.Pretrain.map(item => ({value: item}))}\n             />\n           </Form.Item>\n\n           <Form.Item\n             {...restField}\n             name={[name, 'Model']}\n           >\n             <Select\n               placeholder=\"Model\"\n               options={selections.Model.map(item => ({value: item}))}\n             />\n           </Form.Item>\n\n           <Form.Item\n             {...restField}\n             name={[name, 'ACF']}\n           >\n             <Select\n               placeholder=\"ACF\"\n               options={selections.ACF.map(item => ({value: item}))}\n             />\n           </Form.Item>\n\n           <Form.Item\n             {...restField}\n             name={[name, 'Normalizer']}\n           >\n             <Select\n               placeholder=\"Normalizer\"\n               options={selections.Normalizer.map(item => ({value: item}))}\n             />\n           </Form.Item>\n\n           <MinusCircleOutlined style={{color: 'white'}} onClick={() => remove(name)} />\n        </Space>\n    )\n}\n\nfunction SelectTask({selections, handleClick}) {\n  
console.log(\"SelectTask recieve info:\" , selections);\n  return (\n    <Form\n      name=\"dynamic_form_nest_item\"\n      onFinish={handleClick}\n      style={{ width:\"100%\" }}\n      autoComplete=\"off\"\n    >\n      <Form.List name=\"Tasks\">\n        {(fields, { add, remove }) => (\n          <>\n            <div style={{ overflowY: 'auto', maxHeight: '200px' }}>\n            {fields.map(({ key, name, ...restField }) => (\n              <ASearch key={key} name={name} restField={restField} remove={remove} selections={selections} />\n            ))}\n            </div>\n            <Form.Item style={{marginTop:20}}>\n              <Button type=\"dashed\" onClick={() => add()} icon={<PlusOutlined />} style={{width:\"120px\"}}>\n                Add\n              </Button>\n            </Form.Item>\n            <Form.Item>\n              <Button type=\"primary\" htmlType=\"submit\" style={{width:\"120px\"}}>\n                Search\n              </Button>\n            </Form.Item>\n          </>\n        )}\n      </Form.List>\n    </Form>\n  )\n}\n\nexport default SelectTask;"
  },
  {
    "path": "webui/src/features/analytics/index.js",
    "content": "import React from \"react\";\n\nimport {\n  Row,\n  Col,\n} from \"reactstrap\";\n\nimport TitleCard from \"../../components/Cards/TitleCard\"\nimport LineChart from './components/LineChart'\n\n\nimport Box from \"./charts/Box\";\nimport Trajectory from \"./charts/Trajectory\";\nimport SelectTask from \"./components/SelectTask.js\"\nimport { Skeleton } from \"antd\";\n\nclass Analytics extends React.Component {\n  constructor(props) {\n    super(props);\n    this.state = {\n      isFirst: true,\n      selections: {},\n      BoxData: {},\n      TrajectoryData: [],\n    };\n  }\n\n  handleClick = (values) => {\n    console.log(\"Tasks:\", values.Tasks)\n    const messageToSend = values.Tasks.map(task => ({\n      TaskName: task.TaskName || '',\n      NumObjs: task.NumObjs || '',\n      NumVars: task.NumVars || '',\n      Fidelity: task.Fidelity || '',\n      Workload: task.Workload || '',\n      Seed: task.Seed || '',\n      Refiner: task.Refiner || '',\n      Sampler: task.Sampler || '',\n      Pretrain: task.Pretrain || '',\n      Model: task.Model || '',\n      ACF: task.ACF || '',\n      Normalizer: task.Normalizer || ''\n    }));\n    fetch('http://localhost:5001/api/comparison/choose_task', {\n      method: 'POST',\n      headers: {\n        'Content-Type': 'application/json',\n      },\n      body: JSON.stringify(messageToSend),\n    })\n    .then(response => {\n      if (!response.ok) {\n        throw new Error('Network response was not ok');\n      } \n      return response.json();\n    })\n    .then(data => {\n      // console.log('Message from back-end:', data);\n      this.setState({ BoxData: data.BoxData, TrajectoryData: data.TrajectoryData });\n    })\n    .catch((error) => {\n      console.error('Error sending message:', error);\n    });\n  }\n\n  render() {\n    if (this.state.isFirst) {\n      const messageToSend = {\n        message: 'ask for selections',\n      }\n      fetch('http://localhost:5001/api/comparison/selections', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        } \n        return response.json();\n      })\n      .then(data => {\n        console.log('Message from back-end:', data);\n        this.setState({ selections: data , isFirst: false});\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n      });\n\n    } else {\n      return (\n        <div>\n            <div>\n                <TitleCard\n                  title={\n                    <h5>\n                      <span className=\"fw-semi-bold\">Filter</span>\n                    </h5>\n                  }\n                  collapse\n                >\n                  <SelectTask selections={this.state.selections} handleClick={this.handleClick}/>\n                </TitleCard>\n\n            <div className=\"grid mt-4 grid-cols-1 lg:grid-cols-[50%_50%] gap-6\">\n\n                <LineChart TrajectoryData={this.state.TrajectoryData} />\n\n\n                <TitleCard\n                    title={\n                    <h5>\n                        <span className=\"fw-semi-bold\">Box</span>\n                    </h5>\n                    }\n                    collapse\n                > \n                    <Box BoxData={this.state.BoxData}/>\n                </TitleCard>\n          </div>\n          </div>\n   
     </div>\n      );\n    }\n  }\n}\n\nexport default Analytics;\n"
  },
  {
    "path": "webui/src/features/calendar/CalendarEventsBodyRightDrawer.js",
    "content": "import { CALENDAR_EVENT_STYLE } from \"../../components/CalendarView/util\"\n\nconst THEME_BG = CALENDAR_EVENT_STYLE\n\nfunction CalendarEventsBodyRightDrawer({filteredEvents}){\n    return(\n        <>\n             {\n                filteredEvents.map((e, k) => {\n                    return <div key={k} className={`grid mt-3 card  rounded-box p-3 ${THEME_BG[e.theme] || \"\"}`}>\n                            {e.title}\n                        </div> \n                })\n            }\n        </>\n    )\n}\n\nexport default CalendarEventsBodyRightDrawer"
  },
  {
    "path": "webui/src/features/calendar/index.js",
    "content": "import { useState } from 'react'\nimport CalendarView from '../../components/CalendarView'\nimport moment from 'moment'\nimport { CALENDAR_INITIAL_EVENTS } from '../../utils/dummyData'\nimport { useDispatch } from 'react-redux'\nimport { openRightDrawer } from '../common/rightDrawerSlice'\nimport { RIGHT_DRAWER_TYPES } from '../../utils/globalConstantUtil'\nimport { showNotification } from '../common/headerSlice'\n\n\n\nconst INITIAL_EVENTS = CALENDAR_INITIAL_EVENTS\n\nfunction Calendar(){\n\n    const dispatch = useDispatch()\n\n    const [events, setEvents] = useState(INITIAL_EVENTS)\n\n    // Add your own Add Event handler, like opening modal or random event addition\n    // Format - {title :\"\", theme: \"\", startTime : \"\", endTime : \"\"}, typescript version comming soon :)\n    const addNewEvent = (date) => {\n        let randomEvent = INITIAL_EVENTS[Math.floor(Math.random() * 10)]\n        let newEventObj = {title : randomEvent.title, theme : randomEvent.theme, startTime : moment(date).startOf('day'), endTime : moment(date).endOf('day')}\n        setEvents([...events, newEventObj])\n        dispatch(showNotification({message : \"New Event Added!\", status : 1}))\n    }\n\n    // Open all events of current day in sidebar \n    const openDayDetail = ({filteredEvents, title}) => {\n        dispatch(openRightDrawer({header : title, bodyType : RIGHT_DRAWER_TYPES.CALENDAR_EVENTS, extraObject : {filteredEvents}}))\n    }\n\n    return(\n        <>\n           <CalendarView \n                calendarEvents={events}\n                addNewEvent={addNewEvent}\n                openDayDetail={openDayDetail}\n           />\n        </>\n    )\n}\n\nexport default Calendar"
  },
  {
    "path": "webui/src/features/charts/components/BarChart.js",
    "content": "import {\n  Chart as ChartJS,\n  CategoryScale,\n  LinearScale,\n  BarElement,\n  Title,\n  Tooltip,\n  Legend,\n} from 'chart.js';\nimport { Bar } from 'react-chartjs-2';\nimport TitleCard from '../../../components/Cards/TitleCard';\n\nChartJS.register(CategoryScale, LinearScale, BarElement, Title, Tooltip, Legend);\n\nfunction BarChart(){\n\n    const options = {\n        responsive: true,\n        plugins: {\n          legend: {\n            position: 'top',\n          }\n        },\n      };\n      \n      const labels = ['January', 'February', 'March', 'April', 'May', 'June', 'July'];\n      \n      const data = {\n        labels,\n        datasets: [\n          {\n            label: 'Store 1',\n            data: labels.map(() => { return Math.random() * 1000 + 500 }),\n            backgroundColor: 'rgba(255, 99, 132, 1)',\n          },\n          {\n            label: 'Store 2',\n            data: labels.map(() => { return Math.random() * 1000 + 500 }),\n            backgroundColor: 'rgba(53, 162, 235, 1)',\n          },\n        ],\n      };\n\n    return(\n      <TitleCard title={\"No of Orders\"} topMargin=\"mt-2\">\n            <Bar options={options} data={data} />\n      </TitleCard>\n\n    )\n}\n\n\nexport default BarChart"
  },
  {
    "path": "webui/src/features/charts/components/DoughnutChart.js",
    "content": "import {\n  Chart as ChartJS,\n  Filler,\n  ArcElement,\n  Title,\n  Tooltip,\n  Legend,\n} from 'chart.js';\nimport { Doughnut } from 'react-chartjs-2';\nimport TitleCard from '../../../components/Cards/TitleCard';\nimport Subtitle from '../../../components/Typography/Subtitle';\n\nChartJS.register(ArcElement, Tooltip, Legend,\n    Tooltip,\n    Filler,\n    Legend);\n\nfunction DoughnutChart(){\n\n    const options = {\n        responsive: true,\n        plugins: {\n          legend: {\n            position: 'top',\n          },\n        },\n      };\n      \n      const labels = ['Electronics', 'Home Applicances', 'Beauty', 'Furniture', 'Watches', 'Apparel'];\n      \n      const data = {\n        labels,\n        datasets: [\n            {\n                label: '# of Orders',\n                data: [122, 219, 30, 51, 82, 13],\n                backgroundColor: [\n                  'rgba(255, 99, 132, 0.8)',\n                  'rgba(54, 162, 235, 0.8)',\n                  'rgba(255, 206, 86, 0.8)',\n                  'rgba(75, 192, 192, 0.8)',\n                  'rgba(153, 102, 255, 0.8)',\n                  'rgba(255, 159, 64, 0.8)',\n                ],\n                borderColor: [\n                  'rgba(255, 99, 132, 1)',\n                  'rgba(54, 162, 235, 1)',\n                  'rgba(255, 206, 86, 1)',\n                  'rgba(75, 192, 192, 1)',\n                  'rgba(153, 102, 255, 1)',\n                  'rgba(255, 159, 64, 1)',\n                ],\n                borderWidth: 1,\n              }\n        ],\n      };\n\n    return(\n        <TitleCard title={\"Orders by Category\"}>\n                <Doughnut options={options} data={data} />\n        </TitleCard>\n    )\n}\n\n\nexport default DoughnutChart"
  },
  {
    "path": "webui/src/features/charts/components/LineChart.js",
    "content": "import {\n  Chart as ChartJS,\n  CategoryScale,\n  LinearScale,\n  PointElement,\n  LineElement,\n  Title,\n  Tooltip,\n  Filler,\n  Legend,\n} from 'chart.js';\nimport { Line } from 'react-chartjs-2';\nimport TitleCard from '../../../components/Cards/TitleCard';\n\nChartJS.register(\n  CategoryScale,\n  LinearScale,\n  PointElement,\n  LineElement,\n  Title,\n  Tooltip,\n  Filler,\n  Legend\n);\n\nfunction LineChart(){\n\n  const options = {\n    responsive: true,\n    plugins: {\n      legend: {\n        position: 'top',\n      },\n    },\n  };\n\n  \n  const labels = ['January', 'February', 'March', 'April', 'May', 'June', 'July'];\n\n  const data = {\n  labels,\n  datasets: [\n    {\n      fill: true,\n      label: 'MAU',\n      data: labels.map(() => { return Math.random() * 100 + 500 }),\n      borderColor: 'rgb(53, 162, 235)',\n      backgroundColor: 'rgba(53, 162, 235, 0.5)',\n    },\n  ],\n};\n  \n\n    return(\n      <TitleCard title={\"Montly Active Users (in k)\"} >\n          <Line data={data} options={options}/>\n      </TitleCard>\n    )\n}\n\n\nexport default LineChart"
  },
  {
    "path": "webui/src/features/charts/components/PieChart.js",
    "content": "import {\n    Chart as ChartJS,\n    Filler,\n    ArcElement,\n    Title,\n    Tooltip,\n    Legend,\n  } from 'chart.js';\n  import { Pie } from 'react-chartjs-2';\n  import TitleCard from '../../../components/Cards/TitleCard';\n  import Subtitle from '../../../components/Typography/Subtitle';\n  \n  ChartJS.register(ArcElement, Tooltip, Legend,\n      Tooltip,\n      Filler,\n      Legend);\n  \n  function PieChart(){\n  \n      const options = {\n          responsive: true,\n          plugins: {\n            legend: {\n              position: 'top',\n            },\n          },\n        };\n        \n        const labels = ['India', 'Middle East', 'Europe', 'US', 'Latin America', 'Asia(non-india)'];\n        \n        const data = {\n          labels,\n          datasets: [\n              {\n                  label: '# of Orders',\n                  data: [122, 219, 30, 51, 82, 13],\n                  backgroundColor: [\n                    'rgba(255, 99, 255, 0.8)',\n                    'rgba(54, 162, 235, 0.8)',\n                    'rgba(255, 206, 255, 0.8)',\n                    'rgba(75, 192, 255, 0.8)',\n                    'rgba(153, 102, 255, 0.8)',\n                    'rgba(255, 159, 255, 0.8)',\n                  ],\n                  borderColor: [\n                    'rgba(255, 99, 255, 1)',\n                    'rgba(54, 162, 235, 1)',\n                    'rgba(255, 206, 255, 1)',\n                    'rgba(75, 192, 255, 1)',\n                    'rgba(153, 102, 255, 1)',\n                    'rgba(255, 159, 255, 1)',\n                  ],\n                  borderWidth: 1,\n                }\n          ],\n        };\n  \n      return(\n          <TitleCard title={\"Orders by country\"}>\n                  <Pie options={options} data={data} />\n          </TitleCard>\n      )\n  }\n  \n  \n  export default PieChart"
  },
  {
    "path": "webui/src/features/charts/components/ScatterChart.js",
    "content": "import {\n    Chart as ChartJS,\n    Filler,\n    ArcElement,\n    Tooltip,\n    Legend,\n  } from 'chart.js';\n  import { Scatter } from 'react-chartjs-2';\n  import TitleCard from '../../../components/Cards/TitleCard';\n  \n  ChartJS.register(ArcElement, Tooltip, Legend,\n      Tooltip,\n      Filler,\n      Legend);\n  \n  function ScatterChart(){\n  \n      const options = {\n            scales: {\n                y: {\n                    beginAtZero: true,\n                },\n            },\n        };\n        \n        const data = {\n          datasets: [\n            {\n              label: 'Orders > 1k',\n              data: Array.from({ length: 100 }, () => ({\n                x: Math.random() * 11,\n                y: Math.random() * 31,\n              })),\n              backgroundColor: 'rgba(255, 99, 132, 1)',\n            },\n            {\n                label: 'Orders > 2K',\n                data: Array.from({ length: 100 }, () => ({\n                  x: Math.random() * 12,\n                  y: Math.random() * 12,\n                })),\n                backgroundColor: 'rgba(0, 0, 255, 1)',\n              },\n          ],\n        };\n  \n      return(\n          <TitleCard title={\"No of Orders by month (in k)\"}>\n                  <Scatter options={options} data={data} />\n          </TitleCard>\n      )\n  }\n  \n  \n  export default ScatterChart"
  },
  {
    "path": "webui/src/features/charts/components/StackBarChart.js",
    "content": "import {\n    Chart as ChartJS,\n    CategoryScale,\n    LinearScale,\n    BarElement,\n    Title,\n    Tooltip,\n    Legend,\n  } from 'chart.js';\n  import { Bar } from 'react-chartjs-2';\n  import TitleCard from '../../../components/Cards/TitleCard';\n  \n  ChartJS.register(CategoryScale, LinearScale, BarElement, Title, Tooltip, Legend);\n  \n  function StackBarChart(){\n  \n      const options = {\n            responsive: true,\n            scales: {\n                x: {\n                    stacked: true,\n                },\n                y: {\n                    stacked: true,\n                },\n            },\n        };\n        \n        const labels = ['January', 'February', 'March', 'April', 'May', 'June', 'July'];\n        \n        const data = {\n          labels,\n          datasets: [\n            {\n              label: 'Store 1',\n              data: labels.map(() => { return Math.random() * 1000 + 500 }),\n              backgroundColor: 'rgba(255, 99, 132, 1)',\n            },\n            {\n              label: 'Store 2',\n              data: labels.map(() => { return Math.random() * 1000 + 500 }),\n              backgroundColor: 'rgba(53, 162, 235, 1)',\n            },\n            {\n                label: 'Store 3',\n                data: labels.map(() => { return Math.random() * 1000 + 500 }),\n                backgroundColor: 'rgba(235, 162, 235, 1)',\n              },\n          ],\n        };\n  \n      return(\n        <TitleCard title={\"Sales\"} topMargin=\"mt-2\">\n              <Bar options={options} data={data} />\n        </TitleCard>\n  \n      )\n  }\n  \n  \n  export default StackBarChart"
  },
  {
    "path": "webui/src/features/charts/index.js",
    "content": "import LineChart from './components/LineChart'\nimport BarChart from './components/BarChart'\nimport DoughnutChart from './components/DoughnutChart'\nimport PieChart from './components/PieChart'\nimport ScatterChart from '../dashboard/components/ScatterChart'\nimport StackBarChart from './components/StackBarChart'\nimport Datepicker from \"react-tailwindcss-datepicker\"; \nimport { useState } from 'react'\n\n\n\n\nfunction Charts(){\n\n    const [dateValue, setDateValue] = useState({ \n        startDate: new Date(), \n        endDate: new Date() \n    }); \n    \n    const handleDatePickerValueChange = (newValue) => {\n        console.log(\"newValue:\", newValue); \n        setDateValue(newValue); \n    } \n\n    return(\n        <>\n        <Datepicker \n                containerClassName=\"w-72\" \n                value={dateValue} \n                theme={\"light\"}\n                inputClassName=\"input input-bordered w-72\" \n                popoverDirection={\"down\"}\n                toggleClassName=\"invisible\"\n                onChange={handleDatePickerValueChange} \n                showShortcuts={true} \n                primaryColor={\"white\"} \n            /> \n        {/** ---------------------- Different charts ------------------------- */}\n            <div className=\"grid lg:grid-cols-2 mt-0 grid-cols-1 gap-6\">\n                <StackBarChart />\n                <BarChart />\n            </div>\n\n        \n            <div className=\"grid lg:grid-cols-2 mt-4 grid-cols-1 gap-6\">\n                <DoughnutChart />\n                <PieChart />\n            </div>\n\n            <div className=\"grid lg:grid-cols-2 mt-4 grid-cols-1 gap-6\">\n                <ScatterChart />\n                <LineChart />\n            </div>\n        </>\n    )\n}\n\nexport default Charts"
  },
  {
    "path": "webui/src/features/chatbot/ChatBot.js",
    "content": "\nimport React from 'react';\nimport { Row, Col } from 'reactstrap';\nimport TitleCard from \"../../components/Cards/TitleCard\"\nimport ChatUI from './components/ChatUI'\n\nclass ChatBot extends React.Component {\n\n    render() {\n        return (\n          <div>\n          <div className=\"mt-4 w-[1400px] p-4 bg-gray-100\">\n\n                <TitleCard\n                  title={\n                    <h5>\n                      ChatOpt\n                    </h5>\n                  }\n                >\n                    <ChatUI />\n                </TitleCard>\n            </div>\n          </div>\n        );\n    }\n}\n\n\nexport default ChatBot"
  },
  {
    "path": "webui/src/features/chatbot/components/ChatUI.js",
    "content": "import React from 'react';\nimport Chat, { Bubble, useMessages } from '@chatui/core';\nimport '@chatui/core/dist/index.css';\nimport './chatui-theme.css';\n\nfunction ChatUI() {\n  const { messages, appendMsg, setTyping } = useMessages([]);\n\n  function handleSend(type, val) {\n    if (type === 'text' && val.trim()) {\n      appendMsg({\n        type: 'text',\n        content: { text: val },\n        position: 'right',\n        sender: 'you',\n        time: new Date().toLocaleTimeString(),\n      });\n\n      const messageToSend = {\n        type: 'text',\n        content: { text: val },\n      };\n\n      fetch('http://localhost:5001/api/generate-yaml', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n        .then((response) => {\n          if (!response.ok) {\n            throw new Error('Network response was not ok');\n          }\n          return response.json();\n        })\n        .then((data) => {\n          console.log('Message sent successfully:', data);\n          appendMsg({\n            type: 'text',\n            content: { text: data.message },\n            position: 'left',\n            sender: 'robot',\n            time: new Date().toLocaleTimeString(),\n          });\n        })\n        .catch((error) => {\n          console.error('Error sending message:', error);\n          appendMsg({\n            type: 'text',\n            content: { text: 'There was an error processing your message.' },\n            position: 'left',\n            sender: 'robot',\n            time: new Date().toLocaleTimeString(),\n          });\n        });\n    }\n  }\n\n  function renderMessageContent(msg) {\n    const { content, sender, position, time } = msg;\n    const isLeft = position === 'left';\n\n    // 头像路径\n    const robotAvatar = '/robot.png'; // 替换为机器人头像的路径\n\n    const bubbleStyle = {\n      backgroundColor: isLeft ? '#E0E0E0' : '#81C784', // 左边灰色，右边绿色\n      color: '#000',\n      padding: '10px',\n      borderRadius: '8px',\n    };\n\n\n    \n    return (\n      <div style={{ display: 'flex', alignItems: 'flex-start',justifyContent: isLeft ? 'flex-start' : 'flex-end', marginBottom: '10px' }}>\n        {isLeft && (\n          <img\n            src={robotAvatar}\n            alt=\"robot\"\n            style={{ width: '40px', height: '40px', borderRadius: '50%', marginRight: '10px' }}\n          />\n        )}\n        <div>\n          <div style={{ fontSize: '12px', color: '#555', marginBottom: '4px' }}>\n            {sender === 'robot' ? 'ChatOPT' : 'You'}\n          </div>\n          <Bubble style={bubbleStyle}>\n            <div style={{ display: 'flex', flexDirection: 'column' }}>\n              <span>{content.text}</span>\n              <span style={{ fontSize: '12px', color: '#999', marginTop: '5px', alignSelf: 'flex-end' }}>\n                {time}\n              </span>\n            </div>\n          </Bubble>\n        </div>\n      </div>\n    );\n  }\n\n\n\n\n\n  \n  return (\n    <Chat \n      messages={messages}\n      renderMessageContent={renderMessageContent}\n      onSend={handleSend}\n      placeholder='Send a message...'\n      locale='en-US'\n    />\n  );\n}\n\nexport default ChatUI;\n"
  },
  {
    "path": "webui/src/features/chatbot/components/chatui-theme.css",
    "content": ":root {\n    --brand-1: rgb(53, 162, 235);\n    --brand-2: rgb(53, 162, 235);\n    --brand-3: rgb(53, 162, 235);\n    --marked-color: #111;\n    font-size: 16px;\n    --btn-primary-color: #111;\n  }\n  .ChatApp,\n  .MessageContainer,\n  .Navbar,\n  .Message .Bubble,\n  .QuickReplies,\n  .ChatFooter {\n    background-repeat: no-repeat;\n    background-size: cover;\n    background-color: rgba(0, 0, 0, 0);\n  }\n  .ChatApp {\n    background-color: rgba(0, 0, 0, 0);\n  }\n  /* .Navbar {\n    background-color: rgba(0, 0, 0, 0);\n    border: 0;\n    box-shadow: none;\n  } */\n\n  /* .Navbar {\n    display: flex;\n    justify-content: space-between;\n    align-items: center;\n    padding: 10px 20px;\n    background-color: #000000;\n    color: #000000;\n  } */\n\n  .Navbar-title {\n    color: #8f2020;\n  }\n  .Message.left .Bubble {\n    background-color: rgb(53, 162, 235, 0.3);\n    color: #111;\n    font-size: 16px;\n    font-family: 'Arial', sans-serif; \n    font-weight: bold; \n    line-height: 1.5;\n  }\n  .Message.right .Bubble {\n    background-color: rgb(53, 162, 235, 0.3);\n    color: #111;\n    font-size: 16px;\n    font-family: 'Arial', sans-serif; \n    font-weight: bold; \n    line-height: 1.5;\n  }\n\n  "
  },
  {
    "path": "webui/src/features/common/components/ConfirmationModalBody.js",
    "content": "import {useDispatch, useSelector} from 'react-redux'\nimport axios from 'axios'\nimport { CONFIRMATION_MODAL_CLOSE_TYPES, MODAL_CLOSE_TYPES } from '../../../utils/globalConstantUtil'\nimport { deleteLead } from '../../leads/leadSlice'\nimport { showNotification } from '../headerSlice'\n\nfunction ConfirmationModalBody({ extraObject, closeModal}){\n\n    const dispatch = useDispatch()\n\n    const { message, type, _id, index} = extraObject\n\n\n    const proceedWithYes = async() => {\n        if(type === CONFIRMATION_MODAL_CLOSE_TYPES.LEAD_DELETE){\n            // positive response, call api or dispatch redux function\n            dispatch(deleteLead({index}))\n            dispatch(showNotification({message : \"Lead Deleted!\", status : 1}))\n        }\n        closeModal()\n    }\n\n    return(\n        <> \n        <p className=' text-xl mt-8 text-center'>\n            {message}\n        </p>\n\n        <div className=\"modal-action mt-12\">\n                \n                <button className=\"btn btn-outline   \" onClick={() => closeModal()}>Cancel</button>\n\n                <button className=\"btn btn-primary w-36\" onClick={() => proceedWithYes()}>Yes</button> \n\n        </div>\n        </>\n    )\n}\n\nexport default ConfirmationModalBody"
  },
  {
    "path": "webui/src/features/common/components/NotificationBodyRightDrawer.js",
    "content": "function NotificationBodyRightDrawer(){\n    return(\n        <>\n             {\n                [...Array(15)].map((_, i) => {\n                    return <div key={i} className={\"grid mt-3 card bg-base-200 rounded-box p-3\" + (i < 5 ? \" bg-blue-100\" : \"\")}>\n                            {i % 2 === 0 ? `Your sales has increased by 30% yesterday` : `Total likes for instagram post - New launch this week,  has crossed 100k `}\n                        </div> \n                })\n            }\n        </>\n    )\n}\n\nexport default NotificationBodyRightDrawer"
  },
  {
    "path": "webui/src/features/common/headerSlice.js",
    "content": "import { createSlice } from '@reduxjs/toolkit'\n\nexport const headerSlice = createSlice({\n    name: 'header',\n    initialState: {\n        pageTitle: \"Home\",  // current page title state management\n        noOfNotifications : 15,  // no of unread notifications\n        newNotificationMessage : \"\",  // message of notification to be shown\n        newNotificationStatus : 1,   // to check the notification type -  success/ error/ info\n    },\n    reducers: {\n        setPageTitle: (state, action) => {\n            state.pageTitle = action.payload.title\n        },\n\n\n        removeNotificationMessage: (state, action) => {\n            state.newNotificationMessage = \"\"\n        },\n\n        showNotification: (state, action) => {\n            state.newNotificationMessage = action.payload.message\n            state.newNotificationStatus = action.payload.status\n        },\n    }\n})\n\nexport const { setPageTitle, removeNotificationMessage, showNotification } = headerSlice.actions\n\nexport default headerSlice.reducer"
  },
  {
    "path": "webui/src/features/common/modalSlice.js",
    "content": "import { createSlice } from '@reduxjs/toolkit'\n\nexport const modalSlice = createSlice({\n    name: 'modal',\n    initialState: {\n        title: \"\",  // current  title state management\n        isOpen : false,   // modal state management for opening closing\n        bodyType : \"\",   // modal content management\n        size : \"\",   // modal content management\n        extraObject : {},   \n    },\n    reducers: {\n\n        openModal: (state, action) => {\n            const {title, bodyType, extraObject, size} = action.payload\n            state.isOpen = true\n            state.bodyType = bodyType\n            state.title = title\n            state.size = size || 'md'\n            state.extraObject = extraObject\n        },\n\n        closeModal: (state, action) => {\n            state.isOpen = false\n            state.bodyType = \"\"\n            state.title = \"\"\n            state.extraObject = {}\n        },\n\n    }\n})\n\nexport const { openModal, closeModal } = modalSlice.actions\n\nexport default modalSlice.reducer"
  },
  {
    "path": "webui/src/features/common/rightDrawerSlice.js",
    "content": "import { createSlice } from '@reduxjs/toolkit'\n\nexport const rightDrawerSlice = createSlice({\n    name: 'rightDrawer',\n    initialState: {\n        header: \"\",  // current  title state management\n        isOpen : false,   // right drawer state management for opening closing\n        bodyType : \"\",   // right drawer content management\n        extraObject : {},   \n    },\n    reducers: {\n\n        openRightDrawer: (state, action) => {\n            const {header, bodyType, extraObject} = action.payload\n            state.isOpen = true\n            state.bodyType = bodyType\n            state.header = header\n            state.extraObject = extraObject\n        },\n\n        closeRightDrawer: (state, action) => {\n            state.isOpen = false\n            state.bodyType = \"\"\n            state.header = \"\"\n            state.extraObject = {}\n        },\n\n    }\n})\n\nexport const { openRightDrawer, closeRightDrawer } = rightDrawerSlice.actions\n\nexport default rightDrawerSlice.reducer"
  },
  {
    "path": "webui/src/features/dashboard/components/AmountStats.js",
    "content": "\n\nfunction AmountStats({}){\n    return(\n        <div className=\"stats bg-base-100 shadow\">\n            <div className=\"stat\">\n                <div className=\"stat-title\">Amount to be Collected</div>\n                <div className=\"stat-value\">$25,600</div>\n                <div className=\"stat-actions\">\n                    <button className=\"btn btn-xs\">View Users</button> \n                </div>\n            </div>\n            \n            <div className=\"stat\">\n                <div className=\"stat-title\">Cash in hand</div>\n                <div className=\"stat-value\">$5,600</div>\n                <div className=\"stat-actions\">\n                    <button className=\"btn btn-xs\">View Members</button> \n                </div>\n            </div>\n        </div>\n    )\n}\n\nexport default AmountStats"
  },
  {
    "path": "webui/src/features/dashboard/components/BarChart.js",
    "content": "import {\n  Chart as ChartJS,\n  CategoryScale,\n  LinearScale,\n  BarElement,\n  Title,\n  Tooltip,\n  Legend,\n} from 'chart.js';\nimport { Bar } from 'react-chartjs-2';\nimport TitleCard from '../../../components/Cards/TitleCard';\n\nChartJS.register(CategoryScale, LinearScale, BarElement, Title, Tooltip, Legend);\n\nfunction BarChart({ ImportanceData }){\n\n    const options = {\n        responsive: true,\n        plugins: {\n          legend: {\n            position: 'top',\n          }\n        },\n      };\n      \n      const labels = ['x1', 'x2', 'x3', 'x4'];\n      \n      const data = {\n        labels,\n        datasets: [\n          {\n            label: 'Importance level',\n            data: labels.map(() => { return Math.random() * 0.1 + 0.7 }),\n            backgroundColor: 'rgba(255, 99, 132, 1)',\n          },\n        ],\n      };\n\n    return(\n      <TitleCard title={\"Importance of variables\"}>\n            <Bar options={options} data={data} />\n      </TitleCard>\n\n    )\n}\n\n\nexport default BarChart"
  },
  {
    "path": "webui/src/features/dashboard/components/DashboardStats.js",
    "content": "function DashboardStats({title, icon, value, description, colorIndex}){\n\n    const COLORS = [\"primary\", \"primary\"]\n\n    const getDescStyle = () => {\n        if(description.includes(\"↗︎\"))return \"font-bold text-green-700 dark:text-green-300\"\n        else if(description.includes(\"↙\"))return \"font-bold text-rose-500 dark:text-red-400\"\n        else return \"\"\n    }\n\n    return(\n        <div className=\"stats shadow\">\n            <div className=\"stat\">\n                <div className={`stat-figure dark:text-slate-300 text-${COLORS[colorIndex%2]}`}>{icon}</div>\n                <div className=\"stat-title dark:text-slate-300\">{title}</div>\n                <div className={`stat-value dark:text-slate-300 text-${COLORS[colorIndex%2]}`}>{value}</div>\n                <div className={\"stat-desc  \" + getDescStyle()}>{description}</div>\n            </div>\n        </div>\n    )\n}\n\nexport default DashboardStats"
  },
  {
    "path": "webui/src/features/dashboard/components/DashboardTopBar.js",
    "content": "import SelectBox from \"../../../components/Input/SelectBox\"\nimport ArrowDownTrayIcon  from '@heroicons/react/24/outline/ArrowDownTrayIcon'\nimport ShareIcon  from '@heroicons/react/24/outline/ShareIcon'\nimport EnvelopeIcon  from '@heroicons/react/24/outline/EnvelopeIcon'\nimport EllipsisVerticalIcon  from '@heroicons/react/24/outline/EllipsisVerticalIcon'\nimport ArrowPathIcon  from '@heroicons/react/24/outline/ArrowPathIcon'\nimport { useState } from \"react\"\nimport Datepicker from \"react-tailwindcss-datepicker\"; \n\n\n\nconst periodOptions = [\n    {name : \"Today\", value : \"TODAY\"},\n    {name : \"Yesterday\", value : \"YESTERDAY\"},\n    {name : \"This Week\", value : \"THIS_WEEK\"},\n    {name : \"Last Week\", value : \"LAST_WEEK\"},\n    {name : \"This Month\", value : \"THIS_MONTH\"},\n    {name : \"Last Month\", value : \"LAST_MONTH\"},\n]\n\nfunction DashboardTopBar({updateDashboardPeriod}){\n\n        const [dateValue, setDateValue] = useState({ \n            startDate: new Date(), \n            endDate: new Date() \n        }); \n        \n        const handleDatePickerValueChange = (newValue) => {\n            console.log(\"newValue:\", newValue); \n            setDateValue(newValue); \n            updateDashboardPeriod(newValue)\n        } \n\n\n    return(\n        <div className=\"grid grid-cols-1 sm:grid-cols-2 gap-4\">\n            <div className=\"\">\n            <Datepicker \n                containerClassName=\"w-72 \" \n                value={dateValue} \n                theme={\"light\"}\n                inputClassName=\"input input-bordered w-72\" \n                popoverDirection={\"down\"}\n                toggleClassName=\"invisible\"\n                onChange={handleDatePickerValueChange} \n                showShortcuts={true} \n                primaryColor={\"white\"} \n            /> \n            {/* <SelectBox \n                options={periodOptions}\n                labelTitle=\"Period\"\n                placeholder=\"Select date range\"\n                containerStyle=\"w-72\"\n                labelStyle=\"hidden\"\n                defaultValue=\"TODAY\"\n                updateFormValue={updateSelectBoxValue}\n            /> */}\n            </div>\n            <div className=\"text-right \">\n                <button className=\"btn btn-ghost btn-sm normal-case\"><ArrowPathIcon className=\"w-4 mr-2\"/>Refresh Data</button>\n                <button className=\"btn btn-ghost btn-sm normal-case  ml-2\"><ShareIcon className=\"w-4 mr-2\"/>Share</button>\n\n                <div className=\"dropdown dropdown-bottom dropdown-end  ml-2\">\n                    <label tabIndex={0} className=\"btn btn-ghost btn-sm normal-case btn-square \"><EllipsisVerticalIcon className=\"w-5\"/></label>\n                    <ul tabIndex={0} className=\"dropdown-content menu menu-compact  p-2 shadow bg-base-100 rounded-box w-52\">\n                        <li><a><EnvelopeIcon className=\"w-4\"/>Email Digests</a></li>\n                        <li><a><ArrowDownTrayIcon className=\"w-4\"/>Download</a></li>\n                    </ul>\n                </div>\n            </div>\n        </div>\n    )\n}\n\nexport default DashboardTopBar"
  },
  {
    "path": "webui/src/features/dashboard/components/DoughnutChart.js",
    "content": "import {\n  Chart as ChartJS,\n  Filler,\n  ArcElement,\n  Title,\n  Tooltip,\n  Legend,\n} from 'chart.js';\nimport { Doughnut } from 'react-chartjs-2';\nimport TitleCard from '../../../components/Cards/TitleCard';\nimport Subtitle from '../../../components/Typography/Subtitle';\n\nChartJS.register(ArcElement, Tooltip, Legend,\n    Tooltip,\n    Filler,\n    Legend);\n\nfunction DoughnutChart(){\n\n    const options = {\n        responsive: true,\n        plugins: {\n          legend: {\n            position: 'top',\n          },\n        },\n      };\n      \n      const labels = ['Electronics', 'Home Applicances', 'Beauty', 'Furniture', 'Watches', 'Apparel'];\n      \n      const data = {\n        labels,\n        datasets: [\n            {\n                label: '# of Orders',\n                data: [122, 219, 30, 51, 82, 13],\n                backgroundColor: [\n                  'rgba(255, 99, 132, 0.8)',\n                  'rgba(54, 162, 235, 0.8)',\n                  'rgba(255, 206, 86, 0.8)',\n                  'rgba(75, 192, 192, 0.8)',\n                  'rgba(153, 102, 255, 0.8)',\n                  'rgba(255, 159, 64, 0.8)',\n                ],\n                borderColor: [\n                  'rgba(255, 99, 132, 1)',\n                  'rgba(54, 162, 235, 1)',\n                  'rgba(255, 206, 86, 1)',\n                  'rgba(75, 192, 192, 1)',\n                  'rgba(153, 102, 255, 1)',\n                  'rgba(255, 159, 64, 1)',\n                ],\n                borderWidth: 1,\n              }\n        ],\n      };\n\n    return(\n        <TitleCard title={\"Orders by Category\"}>\n                <Doughnut options={options} data={data} />\n        </TitleCard>\n    )\n}\n\n\nexport default DoughnutChart"
  },
  {
    "path": "webui/src/features/dashboard/components/Footprint.js",
    "content": "import React from 'react';\nimport { Scatter } from 'react-chartjs-2';\nimport {\n  Chart as ChartJS,\n  ScatterController,\n  PointElement,\n  LinearScale,\n  Title,\n  Tooltip,\n  Legend,\n} from 'chart.js';\n\n// 注册必要的组件\nChartJS.register(ScatterController, PointElement, LinearScale, Title, Tooltip, Legend);\n\nfunction Footprint({ ScatterData = {} }) { // 提供默认值为空对象\n  // 检查 ScatterData 是否为对象\n  if (!ScatterData || typeof ScatterData !== 'object') {\n    console.error('ScatterData is not a valid object:', ScatterData);\n    return null; // 处理无效数据时返回 null 以避免进一步错误\n  }\n\n  // 转换数据为 Chart.js 可识别的格式\n  const datasets = Object.keys(ScatterData).map((key, index) => ({\n    label: key, // 数据集名称\n    data: ScatterData[key].map(point => ({ x: point[0], y: point[1] })), // 将数据转换为 {x, y} 格式\n    backgroundColor: 'rgba(75, 192, 192, 1)', // 设置散点的颜色，可以根据需要调整\n    borderColor: 'rgba(75, 192, 192, 1)',\n    pointRadius: 5, // 散点的大小\n  }));\n\n  // 定义图表选项\n  const options = {\n    plugins: {\n      legend: {\n        labels: {\n          color: '#ffffff', // 设置图例文本颜色\n        },\n      },\n      tooltip: {\n        enabled: true, // 启用工具提示\n      },\n    },\n    scales: {\n      x: {\n        type: 'linear', // X轴类型为线性\n        position: 'bottom',\n        ticks: {\n          color: '#ffffff', // X轴标签颜色\n        },\n      },\n      y: {\n        type: 'linear', // Y轴类型为线性\n        ticks: {\n          color: '#ffffff', // Y轴标签颜色\n        },\n      },\n    },\n  };\n\n  // 图表数据\n  const data = {\n    datasets, // 使用转换后的数据集\n  };\n\n  return <Scatter data={data} options={options} />;\n}\n\nexport default Footprint;\n"
  },
  {
    "path": "webui/src/features/dashboard/components/Importance.js",
    "content": "import React, {useState, useEffect} from 'react';\n\nfunction Importance() {  \n  const [imageUrl, setImageUrl] = useState(require('../../../exp_pictures/parameter_network.png'));\n\n  useEffect(() => {\n    // 在组件加载时自动更换图片\n    setImageUrl(require('../../../exp_pictures/parameter_network.png'));\n  }, []);\n\n  return <img src={imageUrl + '?' + new Date().getTime()} alt=\"network\" style={{ width: 'auto', height: 'auto', maxWidth: '100%', maxHeight: '100%' }} /> \n};\n\nexport default Importance;"
  },
  {
    "path": "webui/src/features/dashboard/components/LineChart.js",
    "content": "import React from 'react';\nimport { Line } from 'react-chartjs-2';\nimport TitleCard from '../../../components/Cards/TitleCard';\n\nimport {\n  Chart as ChartJS,\n  LineElement,\n  PointElement,\n  LinearScale,\n  Title,\n  CategoryScale,\n  Tooltip,\n  Legend,\n  Filler,\n} from 'chart.js';\n\n// 注册图表组件\nChartJS.register(LineElement, PointElement, LinearScale, CategoryScale, Title, Tooltip, Legend, Filler);\n\nconst color = [\n  \"#2ec7c9\", \"#b6a2de\", \"#5ab1ef\", \"#ffb980\", \"#d87a80\",\n  \"#8d98b3\", \"#e5cf0d\", \"#97b552\", \"#95706d\", \"#dc69aa\",\n  \"#07a2a4\", \"#9a7fd1\", \"#588dd5\", \"#f5994e\", \"#c05050\",\n  \"#59678c\", \"#c9ab00\", \"#7eb00a\", \"#6f5553\", \"#c14089\"\n];\n\nconst Trajectory = ({ TrajectoryData }) => {\n  // 默认值处理，防止 undefined 错误\n  const data = {\n    datasets: TrajectoryData ? TrajectoryData.flatMap((item, index) => [\n      // 线图数据集\n      {\n        label: `${item.name}`, // 数据集名称\n        data: item.average.map(point => ({ x: point.FEs, y: point.y })), // 将数据点映射为 {x, y}\n        borderColor: color[index % color.length], // 使用颜色数组中的颜色\n        backgroundColor: color[index % color.length], // 线条颜色\n        tension: 0.4, // 曲线平滑度\n        fill: false, // 不填充区域\n      },\n      // 不确定性区域数据集\n      {\n        label: `${item.name} - uncertainty`, // 数据集名称\n        data: item.uncertainty.map(point => ({ x: point.FEs, y: point.y })), // 不确定性区域数据\n        borderColor: color[index % color.length], // 边框颜色\n        backgroundColor: `${color[index % color.length]}33`, // 透明背景色表示不确定性\n        tension: 0.4, // 曲线平滑度\n        fill: true, // 填充区域\n      },\n    ]) : [],\n  };\n\n  const options = {\n    scales: {\n      x: {\n        type: 'linear', // X轴类型为线性\n        title: {\n          display: true,\n          text: 'FEs', // X轴标题\n        },\n      },\n      y: {\n        title: {\n          display: true,\n          text: 'Y', // Y轴标题\n        },\n        beginAtZero: true,\n        nice: true,\n        sync: true,\n      },\n    },\n    plugins: {\n      tooltip: {\n        mode: 'index', // 工具提示框\n        intersect: false,\n        shared: true,\n      },\n      legend: {\n        display: true,\n        position: 'top',\n        labels: {\n          filter: (legendItem) => !legendItem.text.includes('uncertainty'), // 过滤掉包含 \"uncertainty\" 的标签\n        },\n      },\n    },\n  };\n\n  return (\n    <TitleCard title={\"Convergence Trajectory\"}>\n      <Line data={data} options={options}/>\n    </TitleCard>\n    // <div style={{ width: '100%', height: '400px' }}>\n    //   <Line data={data} options={options} />\n    // </div>\n  );\n};\n\nexport default Trajectory;\n"
  },
  {
    "path": "webui/src/features/dashboard/components/PageStats.js",
    "content": "import HeartIcon  from '@heroicons/react/24/outline/HeartIcon'\nimport BoltIcon  from '@heroicons/react/24/outline/BoltIcon'\n\n\nfunction PageStats({}){\n    return(\n        <div className=\"stats bg-base-100 shadow\">\n  \n  <div className=\"stat\">\n    <div className=\"stat-figure invisible md:visible\">\n        <HeartIcon className='w-8 h-8'/>\n    </div>\n    <div className=\"stat-title\">Total Likes</div>\n    <div className=\"stat-value\">25.6K</div>\n    <div className=\"stat-desc\">21% more than last month</div>\n  </div>\n  \n  <div className=\"stat\">\n    <div className=\"stat-figure invisible md:visible\">\n        <BoltIcon className='w-8 h-8'/>\n    </div>\n    <div className=\"stat-title\">Page Views</div>\n    <div className=\"stat-value\">2.6M</div>\n    <div className=\"stat-desc\">14% more than last month</div>\n  </div>\n</div>\n    )\n}\n\nexport default PageStats"
  },
  {
    "path": "webui/src/features/dashboard/components/ScatterChart.js",
    "content": "import React from 'react';\nimport { Scatter } from 'react-chartjs-2';\nimport {\n  Chart as ChartJS,\n  ScatterController,\n  PointElement,\n  LinearScale,\n  Title,\n  Tooltip,\n  Legend,\n  Filler,\n} from 'chart.js';\nimport TitleCard from '../../../components/Cards/TitleCard';\n\n// 注册必要的组件\nChartJS.register(\n  ScatterController,\n  PointElement,\n  LinearScale,\n  Title,\n  Tooltip,\n  Legend,\n  Filler\n);\n\n\n\nconst colors = [\n  'rgba(255, 99, 132, 0.5)', // 红色\n  'rgba(54, 162, 235, 0.5)', // 蓝色\n  'rgba(255, 206, 86, 0.5)', // 黄色\n];\n\n// 转换数据为 Chart.js 可识别的格式\n\n\n\nfunction Footprint({ ScatterData = {} }) {\n  // 检查 ScatterData 是否为对象\n  if (!ScatterData || typeof ScatterData !== 'object') {\n    console.error('ScatterData is not a valid object:', ScatterData);\n    return null; // 处理无效数据时返回 null 以避免进一步错误\n  }\n\n  const datasets = Object.keys(ScatterData).map((key, index) => ({\n    label: key, // 数据集名称\n    data: ScatterData[key].map(point => ({ x: point[0], y: point[1] })), // 将数据转换为 {x, y} 格式\n    backgroundColor: colors[index % colors.length], // 设置不同的数据集颜色\n    borderColor: colors[index % colors.length],\n    pointRadius: 5, // 散点的大小\n  }));\n\n  // 定义图表选项\n  const options = {\n    scales: {\n      x: {\n        type: 'linear', // X轴类型为线性\n      },\n      y: {\n        type: 'linear', // Y轴类型为线性\n      },\n    },\n    plugins: {\n\n      tooltip: {\n        enabled: true, // 启用工具提示\n      },\n    },\n  };\n\n  // 图表数据\n  const data = {\n    datasets, // 使用转换后的数据集\n  };\n\n  return (\n    <TitleCard title={\"Footprint\"}>\n        <Scatter data={data} options={options}/>\n    </TitleCard>\n  );\n}\n\nexport default Footprint;\n"
  },
  {
    "path": "webui/src/features/dashboard/components/Trajectory.js",
    "content": "import React from 'react';\nimport { Chart, Area, Line, Tooltip, View } from 'bizcharts';\n\nconst scale = {\n  y: { \n    sync: true,\n    nice: true,\n  },\n  FEs: {\n    type: 'linear',\n    nice: true,\n  },\n};\n\nconst color = [\n  \"#2ec7c9\", \"#b6a2de\", \"#5ab1ef\", \"#ffb980\", \"#d87a80\", \"#8d98b3\", \n  \"#e5cf0d\", \"#97b552\", \"#95706d\", \"#dc69aa\", \"#07a2a4\", \"#9a7fd1\", \n  \"#588dd5\", \"#f5994e\", \"#c05050\", \"#59678c\", \"#c9ab00\", \"#7eb00a\", \n  \"#6f5553\", \"#c14089\"\n];\n\nconst Trajectory = ({ TrajectoryData }) => {\n  return (\n    <Chart id=\"chart\" scale={scale} height={400} autoFit>\n      <Tooltip shared />\n      {TrajectoryData.map((item, index) => (\n        <React.Fragment key={index}>\n          <View data={item.average} scale={{ y: { alias: `${item.name}` } }}>\n            <Line position=\"FEs*y\" color={color[index]} />\n          </View>\n          <View data={item.uncertainty} scale={{ y: { alias: `${item.name}-uncertainty` } }}>\n            <Area position=\"FEs*y\" color={color[index]} shape=\"smooth\" />\n          </View>\n        </React.Fragment>\n      ))}\n    </Chart>\n  );\n};\n\nexport default Trajectory;\n"
  },
  {
    "path": "webui/src/features/dashboard/components/UserChannels.js",
    "content": "import TitleCard from \"../../../components/Cards/TitleCard\"\n\nconst userSourceData = [\n    {source : \"Facebook Ads\", count : \"26,345\", conversionPercent : 10.2},\n    {source : \"Google Ads\", count : \"21,341\", conversionPercent : 11.7},\n    {source : \"Instagram Ads\", count : \"34,379\", conversionPercent : 12.4},\n    {source : \"Affiliates\", count : \"12,359\", conversionPercent : 20.9},\n    {source : \"Organic\", count : \"10,345\", conversionPercent : 10.3},\n]\n\nfunction UserChannels(){\n    return(\n        <TitleCard title={\"User Signup Source\"}>\n             {/** Table Data */}\n             <div className=\"overflow-x-auto\">\n                <table className=\"table w-full\">\n                    <thead>\n                    <tr>\n                        <th></th>\n                        <th className=\"normal-case\">Source</th>\n                        <th className=\"normal-case\">No of Users</th>\n                        <th className=\"normal-case\">Conversion</th>\n                    </tr>\n                    </thead>\n                    <tbody>\n                        {\n                            userSourceData.map((u, k) => {\n                                return(\n                                    <tr key={k}>\n                                        <th>{k+1}</th>\n                                        <td>{u.source}</td>\n                                        <td>{u.count}</td>\n                                        <td>{`${u.conversionPercent}%`}</td>\n                                    </tr>\n                                )\n                            })\n                        }\n                    </tbody>\n                </table>\n            </div>\n        </TitleCard>\n    )\n}\n\nexport default UserChannels"
  },
  {
    "path": "webui/src/features/dashboard/components/my_theme.json",
    "content": "{\n    \"version\": 1,\n    \"themeName\": \"macarons\",\n    \"theme\": {\n        \"seriesCnt\": \"4\",\n        \"titleColor\": \"#ffffff\",\n        \"subtitleColor\": \"rgba(170,170,170,0.92)\",\n        \"textColorShow\": false,\n        \"textColor\": \"#333\",\n        \"markTextColor\": \"#ffffff\",\n        \"color\": [\n            \"#2ec7c9\",\n            \"#b6a2de\",\n            \"#5ab1ef\",\n            \"#ffb980\",\n            \"#d87a80\",\n            \"#8d98b3\",\n            \"#e5cf0d\",\n            \"#97b552\",\n            \"#95706d\",\n            \"#dc69aa\",\n            \"#07a2a4\",\n            \"#9a7fd1\",\n            \"#588dd5\",\n            \"#f5994e\",\n            \"#c05050\",\n            \"#59678c\",\n            \"#c9ab00\",\n            \"#7eb00a\",\n            \"#6f5553\",\n            \"#c14089\"\n        ],\n        \"borderColor\": \"#ffffff\",\n        \"borderWidth\": \"0\",\n        \"visualMapColor\": [\n            \"#5ab1ef\",\n            \"#e0ffff\"\n        ],\n        \"legendTextColor\": \"#ffffff\",\n        \"kColor\": \"#d87a80\",\n        \"kColor0\": \"#2ec7c9\",\n        \"kBorderColor\": \"#d87a80\",\n        \"kBorderColor0\": \"#2ec7c9\",\n        \"kBorderWidth\": 1,\n        \"lineWidth\": 2,\n        \"symbolSize\": 3,\n        \"symbol\": \"emptyCircle\",\n        \"symbolBorderWidth\": 1,\n        \"lineSmooth\": true,\n        \"graphLineWidth\": 1,\n        \"graphLineColor\": \"#aaaaaa\",\n        \"mapLabelColor\": \"#d87a80\",\n        \"mapLabelColorE\": \"rgb(100,0,0)\",\n        \"mapBorderColor\": \"#eeeeee\",\n        \"mapBorderColorE\": \"#444\",\n        \"mapBorderWidth\": 0.5,\n        \"mapBorderWidthE\": 1,\n        \"mapAreaColor\": \"#dddddd\",\n        \"mapAreaColorE\": \"rgba(254,153,78,1)\",\n        \"axes\": [\n            {\n                \"type\": \"all\",\n                \"name\": \"通用坐标轴\",\n                \"axisLineShow\": true,\n                \"axisLineColor\": \"#eeeeee\",\n                \"axisTickShow\": true,\n                \"axisTickColor\": \"#eeeeee\",\n                \"axisLabelShow\": true,\n                \"axisLabelColor\": \"#eeeeee\",\n                \"splitLineShow\": true,\n                \"splitLineColor\": [\n                    \"#aaaaaa\"\n                ],\n                \"splitAreaShow\": false,\n                \"splitAreaColor\": [\n                    \"#eeeeee\"\n                ]\n            },\n            {\n                \"type\": \"category\",\n                \"name\": \"类目坐标轴\",\n                \"axisLineShow\": true,\n                \"axisLineColor\": \"#ffffff\",\n                \"axisTickShow\": true,\n                \"axisTickColor\": \"#ffffff\",\n                \"axisLabelShow\": true,\n                \"axisLabelColor\": \"#ffffff\",\n                \"splitLineShow\": false,\n                \"splitLineColor\": [\n                    \"#ffffff\"\n                ],\n                \"splitAreaShow\": false,\n                \"splitAreaColor\": [\n                    \"rgba(250,250,250,0.3)\",\n                    \"rgba(200,200,200,0.3)\"\n                ]\n            },\n            {\n                \"type\": \"value\",\n                \"name\": \"数值坐标轴\",\n                \"axisLineShow\": true,\n                \"axisLineColor\": \"#ffffff\",\n                \"axisTickShow\": true,\n                \"axisTickColor\": \"#ffffff\",\n                \"axisLabelShow\": true,\n                
\"axisLabelColor\": \"#ffffff\",\n                \"splitLineShow\": true,\n                \"splitLineColor\": [\n                    \"#ffffff\"\n                ],\n                \"splitAreaShow\": false,\n                \"splitAreaColor\": [\n                    \"rgba(250,250,250,0.3)\",\n                    \"rgba(200,200,200,0.3)\"\n                ]\n            },\n            {\n                \"type\": \"log\",\n                \"name\": \"对数坐标轴\",\n                \"axisLineShow\": true,\n                \"axisLineColor\": \"#ffffff\",\n                \"axisTickShow\": true,\n                \"axisTickColor\": \"#ffffff\",\n                \"axisLabelShow\": true,\n                \"axisLabelColor\": \"#ffffff\",\n                \"splitLineShow\": true,\n                \"splitLineColor\": [\n                    \"#ffffff\"\n                ],\n                \"splitAreaShow\": false,\n                \"splitAreaColor\": [\n                    \"rgba(250,250,250,0.3)\",\n                    \"rgba(200,200,200,0.3)\"\n                ]\n            },\n            {\n                \"type\": \"time\",\n                \"name\": \"时间坐标轴\",\n                \"axisLineShow\": true,\n                \"axisLineColor\": \"#ffffff\",\n                \"axisTickShow\": true,\n                \"axisTickColor\": \"#ffffff\",\n                \"axisLabelShow\": true,\n                \"axisLabelColor\": \"#ffffff\",\n                \"splitLineShow\": true,\n                \"splitLineColor\": [\n                    \"#ffffff\"\n                ],\n                \"splitAreaShow\": false,\n                \"splitAreaColor\": [\n                    \"rgba(250,250,250,0.3)\",\n                    \"rgba(200,200,200,0.3)\"\n                ]\n            }\n        ],\n        \"axisSeperateSetting\": true,\n        \"toolboxColor\": \"#ffffff\",\n        \"toolboxEmphasisColor\": \"#18a4a6\",\n        \"tooltipAxisColor\": \"#18a4a6\",\n        \"tooltipAxisWidth\": \"2\",\n        \"timelineLineColor\": \"#008acd\",\n        \"timelineLineWidth\": 1,\n        \"timelineItemColor\": \"#ffffff\",\n        \"timelineItemColorE\": \"#006cdd\",\n        \"timelineCheckColor\": \"#2ec7c9\",\n        \"timelineCheckBorderColor\": \"#2ec7c9\",\n        \"timelineItemBorderWidth\": 1,\n        \"timelineControlColor\": \"#008acd\",\n        \"timelineControlBorderColor\": \"#008acd\",\n        \"timelineControlBorderWidth\": 0.5,\n        \"timelineLabelColor\": \"#008acd\",\n        \"datazoomBackgroundColor\": \"rgba(47,69,84,0)\",\n        \"datazoomDataColor\": \"#efefff\",\n        \"datazoomFillColor\": \"rgba(182,162,222,0.2)\",\n        \"datazoomHandleColor\": \"#008acd\",\n        \"datazoomHandleWidth\": \"100\",\n        \"datazoomLabelColor\": \"#333333\"\n    }\n}"
  },
  {
    "path": "webui/src/features/dashboard/index.js",
    "content": "import React from \"react\";\n\nimport { Row, Col, Button } from \"reactstrap\";\nimport {Input, Modal } from \"antd\";  \nimport TitleCard from \"../../components/Cards/TitleCard\"\n\n\nimport LineChart from './components/LineChart'\nimport BarChart from './components/BarChart'\n// import Footprint from \"./components/Footprint\";\nimport Footprint from \"./components/ScatterChart\";\n\n\n\nclass Dashboard extends React.Component {\n  constructor(props) {\n    super(props);\n    this.state = {\n      selectedTaskIndex: -1,\n      tasksInfo: [],\n      ScatterData: [],\n      TrajectoryData: [],\n      isModalVisible: false,  // 控制Modal显示\n      errorMessage: \"\"  // 存储输入的错误信息\n    };\n  }\n\n  // Select the corresponding task to display\n  handleTaskClick = (index) => {\n    console.log(index)\n    this.setState({ selectedTaskIndex: index });\n    const messageToSend = {\n      taskname:this.state.tasksInfo[this.state.selectedTaskIndex].problem_name,\n    }\n    fetch('http://localhost:5001/api/Dashboard/charts', {\n      method: 'POST',\n      headers: {\n        'Content-Type': 'application/json',\n      },\n      body: JSON.stringify(messageToSend),\n    })\n    .then(response => {\n      if (!response.ok) {\n        throw new Error('Network response was not ok');\n      } \n      return response.json();\n    })\n    .then(data => {\n      // console.log('Message from back-end:', data);\n      this.setState({\n        // BarData: data.BarData,\n        // RadarData: data.RadarData,\n        ScatterData: data.ScatterData,\n        TrajectoryData: data.TrajectoryData\n      })\n    })\n    .catch((error) => {\n      console.error('Error sending message:', error);\n    });\n  }\n\n  // 处理按钮点击事件\n  showModal = () => {\n    this.setState({ isModalVisible: true });\n  };\n\n  handleOk = () => {\n    console.log(this.state.errorMessage);\n    this.setState({ isModalVisible: false });\n\n    const messageToSend = {\n    errorMessage: this.state.errorMessage\n  };\n\n  fetch(\"http://localhost:5001/api/Dashboard/errorsubmit\", {  // 根据实际的API端点进行调整\n    method: \"POST\",\n    headers: {\n      \"Content-Type\": \"application/json\",\n    },\n    body: JSON.stringify(messageToSend),\n  })\n    .then((response) => {\n      if (!response.ok) {\n        throw new Error(\"Network response was not ok\");\n      }\n      return response.json();\n    })\n    .then((data) => {\n      console.log(\"Message sent successfully:\", data);\n      this.setState({ isModalVisible: false, errorMessage: \"\" });\n    })\n    .catch((error) => {\n      console.error(\"Error sending message:\", error);\n    });\n};\n\n\n  \n\n\n  handleCancel = () => {\n    this.setState({ isModalVisible: false });\n  };\n\n  handleInputChange = (e) => {\n    this.setState({ errorMessage: e.target.value });\n  };\n\n  componentDidMount() {\n    // 开始定时调用 fetchData 函数\n    this.intervalId = setInterval(this.fetchData, 1000000);\n  }\n\n  componentWillUnmount() {\n    // 清除定时器，以防止内存泄漏\n    clearInterval(this.intervalId);\n  }\n\n  fetchData = async () => {\n    try {\n      const messageToSend = {\n        taskname:this.state.tasksInfo[this.state.selectedTaskIndex].problem_name,\n      }\n      const response = await fetch('http://localhost:5001/api/Dashboard/trajectory', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend)\n      });\n      if (!response.ok) {\n        throw new Error('Network response was not ok');\n     
 }\n      const data = await response.json();\n      console.log('Data from server:', data);\n      // Process the data returned by the server\n      this.setState({\n        // BarData: data.BarData,\n        // RadarData: data.RadarData,\n        ScatterData: data.ScatterData,\n        TrajectoryData: data.TrajectoryData\n      })\n    } catch (error) {\n      console.error('Error fetching data:', error);\n    }\n  };\n\n  render() { \n    // If first time rendering, then render the default task\n    // If not, then render the task that was clicked\n    if (this.state.selectedTaskIndex === -1) {\n      // TODO: ask for task list from back-end\n      const messageToSend = {\n        action: 'ask for tasks information',\n      }\n      fetch('http://localhost:5001/api/Dashboard/tasks', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        } \n        return response.json();\n      })\n      .then(data => {\n        console.log('Message from back-end:', data);\n        this.setState({ selectedTaskIndex: 0,  tasksInfo: data });\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n      });\n\n      \n      // Set the default task as the first task in the list\n      return (\n        <div>\n          <h1 className=\"page-title\">\n            Dashboard - <span className=\"fw-semi-bold\">Tasks</span>\n          </h1>\n        </div>\n      )\n    } \n    else {\n\n        return (\n            <>\n            <TitleCard\n                title={\n                    <h5>\n                    <span className=\"fw-semi-bold\">Choose Dataset</span>\n                    </h5>\n                    }\n                    collapse\n            >\n\n                <div className=\"grid mt-4 grid-cols-1 lg:grid-cols-[20%_80%] gap-6\">\n\n                <div style={{ overflowY: 'auto', maxHeight: \"400px\" }}>\n                    {this.state.tasksInfo.map((task, index) => (\n                        <Button\n                        key={index}\n                        onClick={() => this.handleTaskClick(index)}\n                        style={{ backgroundColor: 'rgba(53, 162, 235, 0.5)', color: '#000000' }} // custom background and text color\n\n                        >\n                        {task.problem_name}\n                        \n                        </Button>\n                    ))}\n                </div>\n                <div style={{ overflowY: 'auto', maxHeight: '400px', padding: '10px', border: '1px solid #ddd', borderRadius: '8px', backgroundColor: '#f9f9f9' }}>\n                \n                <section style={{ marginBottom: '20px', borderBottom: '1px solid #e0e0e0', paddingBottom: '10px' }}>\n                    <h4 style={{ color: '#333', marginBottom: '10px', fontSize: '1.2em', fontWeight: 'bold' }}>\n                        Problem Information\n                    </h4>\n                    <ul style={{ listStyle: 'none', padding: 0, lineHeight: '1.6' }}>\n                        <li style={{ marginBottom: '8px', fontSize: '0.95em' }}>\n                        <strong>Problem Name:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].problem_name}\n                        </li>\n                        <li style={{ marginBottom: '8px', fontSize: '0.95em' }}>\n                        
<strong>Variable num:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].dim},&nbsp;\n                        <strong>Objective num:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].obj},&nbsp;\n                        <strong>Seeds:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].seeds},&nbsp;\n                        <strong>Budget type:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].budget_type},&nbsp;\n                        <strong>Budget:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].budget},&nbsp;\n                        <strong>Workloads:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].workloads},&nbsp;\n                        <strong>Fidelity:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].fidelity}\n                        </li>\n                    </ul>\n                </section>\n\n\n                <section style={{ marginBottom: '20px', borderBottom: '1px solid #e0e0e0', paddingBottom: '10px' }}>\n                    <h4 style={{ color: '#333', marginBottom: '10px', fontSize: '1.2em', fontWeight: 'bold' }}>\n                        Algorithm Objects\n                    </h4>\n                    <ul style={{ listStyle: 'none', padding: 0, lineHeight: '1.6' }}>\n                        <li style={{ marginBottom: '8px', fontSize: '0.95em' }}>\n                        <strong>Narrow Search Space:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].SpaceRefiner},&nbsp;\n                        <strong>Initialization:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].Sampler},&nbsp;\n                        <strong>Pre-train:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].Pretrain},&nbsp;\n                        <strong>Surrogate Model:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].Model},&nbsp;\n                        <strong>Acquisition Function:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].ACF},&nbsp;\n                        <strong>Normalizer:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].Normalizer}\n                        </li>\n                        <li style={{ marginBottom: '8px', fontSize: '0.95em' }}>\n                        <strong>DatasetSelector:</strong> {this.state.tasksInfo[this.state.selectedTaskIndex].DatasetSelector}\n                        </li>\n                    </ul>\n                </section>\n\n                <section style={{ marginBottom: '20px', paddingBottom: '10px' }}>\n                    <h4 style={{ color: '#333', marginBottom: '10px', fontSize: '1.2em', fontWeight: 'bold' }}>\n                        Auxiliary Data List\n                    </h4>\n\n                    <div style={{ marginBottom: '10px', fontSize: '0.95em' }}>\n                        <strong>Narrow Search Space:</strong>\n                        <ul style={{ listStyle: 'square', paddingLeft: '20px', lineHeight: '1.6' }}>\n                        {this.state.tasksInfo[this.state.selectedTaskIndex].metadata.SpaceRefiner.map((dataset, index) => (\n                            <li key={index}>{dataset}</li>\n                        ))}\n                        </ul>\n                    </div>\n\n                    <div style={{ marginBottom: '10px', fontSize: '0.95em' }}>\n                        <strong>Initialization:</strong>\n                        <ul style={{ listStyle: 'square', paddingLeft: '20px', lineHeight: '1.6' }}>\n                        
{this.state.tasksInfo[this.state.selectedTaskIndex].metadata.Sampler.map((dataset, index) => (\n                            <li key={index}>{dataset}</li>\n                        ))}\n                        </ul>\n                    </div>\n\n                    <div style={{ marginBottom: '10px', fontSize: '0.95em' }}>\n                        <strong>Pre-train:</strong>\n                        <ul style={{ listStyle: 'square', paddingLeft: '20px', lineHeight: '1.6' }}>\n                        {this.state.tasksInfo[this.state.selectedTaskIndex].metadata.Pretrain.map((dataset, index) => (\n                            <li key={index}>{dataset}</li>\n                        ))}\n                        </ul>\n                    </div>\n\n                    <div style={{ marginBottom: '10px', fontSize: '0.95em' }}>\n                        <strong>Surrogate Model:</strong>\n                        <ul style={{ listStyle: 'square', paddingLeft: '20px', lineHeight: '1.6' }}>\n                        {this.state.tasksInfo[this.state.selectedTaskIndex].metadata.Model.map((dataset, index) => (\n                            <li key={index}>{dataset}</li>\n                        ))}\n                        </ul>\n                    </div>\n\n                    <div style={{ marginBottom: '10px', fontSize: '0.95em' }}>\n                        <strong>Acquisition Function:</strong>\n                        <ul style={{ listStyle: 'square', paddingLeft: '20px', lineHeight: '1.6' }}>\n                        {this.state.tasksInfo[this.state.selectedTaskIndex].metadata.ACF.map((dataset, index) => (\n                            <li key={index}>{dataset}</li>\n                        ))}\n                        </ul>\n                    </div>\n\n                    <div style={{ marginBottom: '10px', fontSize: '0.95em' }}>\n                        <strong>Normalizer:</strong>\n                        <ul style={{ listStyle: 'square', paddingLeft: '20px', lineHeight: '1.6' }}>\n                        {this.state.tasksInfo[this.state.selectedTaskIndex].metadata.Normalizer.map((dataset, index) => (\n                            <li key={index}>{dataset}</li>\n                        ))}\n                        </ul>\n                    </div>\n                </section>\n\n                    </div>\n            </div>\n            </TitleCard>\n\n\n              <div className=\"grid lg:grid-cols-3 mt-4 grid-cols-1 gap-6\">\n                <LineChart  TrajectoryData={this.state.TrajectoryData}/>\n                <BarChart ImportanceData={this.state.Importance}/>\n                <Footprint ScatterData={this.state.ScatterData}/>\n              </div>\n          \n\n            </>\n          );          \n    }\n  }\n}\n\nexport default Dashboard;\n"
  },
  {
    "path": "webui/src/features/documentation/DocComponents.js",
    "content": "import { useEffect, useState } from \"react\"\nimport { useDispatch } from \"react-redux\"\nimport TitleCard from \"../../components/Cards/TitleCard\"\nimport { setPageTitle, showNotification } from \"../common/headerSlice\"\nimport DocComponentsNav from \"./components/DocComponentsNav\"\nimport ReadMe from \"./components/GettingStartedContent\"\nimport DocComponentsContent from \"./components/DocComponentsContent\"\nimport FeaturesNav from \"./components/FeaturesNav\"\nimport FeaturesContent from \"./components/FeaturesContent\"\n\n\n\nfunction DocComponents(){\n\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Documentation\"}))\n      }, [])\n\n\n    return(\n        <>\n            <div className=\"bg-base-100  flex overflow-hidden  rounded-lg\" style={{height : \"82vh\"}}>\n                    <div className=\"flex-none p-4\">\n                        <DocComponentsNav activeIndex={1}/>\n                    </div>\n\n                    <div className=\"grow pt-16  overflow-y-scroll\">\n                        <DocComponentsContent />\n                    </div>\n\n                </div>\n           \n        </>\n    )\n}\n\nexport default DocComponents"
  },
  {
    "path": "webui/src/features/documentation/DocFeatures.js",
    "content": "import { useEffect, useState } from \"react\"\nimport { useDispatch } from \"react-redux\"\nimport TitleCard from \"../../components/Cards/TitleCard\"\nimport { setPageTitle, showNotification } from \"../common/headerSlice\"\nimport GettingStartedNav from \"./components/GettingStartedNav\"\nimport ReadMe from \"./components/GettingStartedContent\"\nimport GettingStartedContent from \"./components/GettingStartedContent\"\nimport FeaturesNav from \"./components/FeaturesNav\"\nimport FeaturesContent from \"./components/FeaturesContent\"\n\n\n\nfunction Features(){\n\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Documentation\"}))\n      }, [])\n\n\n    return(\n        <>\n            <div className=\"bg-base-100  flex overflow-hidden  rounded-lg\" style={{height : \"82vh\"}}>\n                    <div className=\"flex-none p-4\">\n                        <FeaturesNav activeIndex={1}/>\n                    </div>\n\n                    <div className=\"grow pt-16  overflow-y-scroll\">\n                        <FeaturesContent />\n                    </div>\n\n                </div>\n           \n        </>\n    )\n}\n\nexport default Features"
  },
  {
    "path": "webui/src/features/documentation/DocGettingStarted.js",
    "content": "import { useEffect, useState } from \"react\"\nimport { useDispatch } from \"react-redux\"\nimport TitleCard from \"../../components/Cards/TitleCard\"\nimport { setPageTitle, showNotification } from \"../common/headerSlice\"\nimport GettingStartedNav from \"./components/GettingStartedNav\"\nimport ReadMe from \"./components/GettingStartedContent\"\nimport GettingStartedContent from \"./components/GettingStartedContent\"\n\n\n\nfunction GettingStarted(){\n\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Documentation\"}))\n      }, [])\n\n\n    return(\n        <>\n            <div className=\"bg-base-100  flex overflow-hidden  rounded-lg\" style={{height : \"82vh\"}}>\n                    <div className=\"flex-none p-4\">\n                        <GettingStartedNav activeIndex={1}/>\n                    </div>\n\n                    <div className=\"grow pt-16  overflow-y-scroll\">\n                        <GettingStartedContent />\n                    </div>\n\n                </div>\n           \n        </>\n    )\n}\n\nexport default GettingStarted"
  },
  {
    "path": "webui/src/features/documentation/components/DocComponentsContent.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport InputText from '../../../components/Input/InputText'\nimport Title from '../../../components/Typography/Title'\nimport Subtitle from '../../../components/Typography/Subtitle'\nimport ErrorText from '../../../components/Typography/ErrorText'\nimport HelperText from '../../../components/Typography/HelperText'\n\nimport { setPageTitle, showNotification } from '../../common/headerSlice'\nimport TitleCard from '../../../components/Cards/TitleCard'\n\nfunction DocComponentsContent(){\n\n    const dispatch = useDispatch()\n\n    const updateFormValue = () => {\n        // Dummy function for input text component\n    }\n\n    return(\n        <>\n            <article className=\"prose\">\n              <h1 className=\"\" >Components</h1>\n\n                We have added some global components that are used commonly inside the project.\n\n                {/* Typography*/}\n              <h2 id=\"component1\">Typography</h2>\n                <div>\n                    These components are present under <span className=\"badge mt-0 mb-0 badge-ghost\">/components/Typography</span> folder. It accepts styleClass as props which can be used to pass additional className for style. It has following components which you can import and use it - \n                    <div className=\"mockup-code mt-4\">\n                    <pre className='my-0 py-0'><code>{'import  Title from \"../components/Typography/Title\"\\n  <Title>Your Title here</Title>'}</code></pre>\n                    </div>\n                    <ul>\n                      <li><span className='font-bold'>Title</span> - Use this component to show title \n                      <Title>Title Example</Title>\n                       </li>\n                      <li><span className='font-bold'>Subtitle</span> - Component that shows text smaller than title \n                      <Subtitle styleClass=\"mt-4 mb-6\">Subtitle Example</Subtitle>\n                      </li>\n                      <li><span className='font-bold'>ErrorText</span> - Used for showing error messages \n                      <ErrorText styleClass=\"mt-2\">Error Text Example</ErrorText>\n                      </li>\n                      <li><span className='font-bold'>HelperText</span> - Used for showing secondary message \n                      <HelperText styleClass=\"\">Helper Text Example</HelperText></li>\n                    </ul>\n                </div>\n\n\n                 {/* Form Input*/}\n              <h2 id=\"component2\">Form Input</h2>\n                <p>\n                      Many times we have to use form input like text, select one or toogle and in every file we have to handle its state management, here we have added global form component that can be used in any file and state variables can be managed by passing props to it. It is present in <span className=\"badge mt-0 mb-0 badge-ghost\">/components/Input</span> folder. 
\n                </p>\n                Ex- \n                <div className=\"mockup-code mt-4\">\n                    <pre className='my-0 py-0'><code>{'const INITIAL_LEAD_OBJ = {\\n   first_name : \"\", \\n   last_name : \"\", \\n   email : \"\" \\n  } \\n   const [leadObj, setLeadObj] = useState(INITIAL_LEAD_OBJ) \\n   const updateFormValue = ({updateType, value}) => {\\n    setErrorMessage(\"\") \\n    setLeadObj({...leadObj, [updateType] : value})\\n   }\\n\\n<InputText type=\"text\" defaultValue={leadObj.first_name}  \\n  updateType=\"first_name\" containerStyle=\"mt-4\"  \\n  labelTitle=\"First Name\" updateFormValue={updateFormValue}/>'}</code></pre>\n                </div>\n                <InputText type=\"text\" defaultValue={\"input value\"}  updateType=\"first_name\" containerStyle=\"mt-3\" labelTitle=\"Label Title\" updateFormValue={updateFormValue}/>\n                \n\n               <p> This example is from add new lead modal, here we are importing component for creating text input and passing some props to handle its content and state variable. Description of props are as follows - </p>\n                <ul>\n                  <li><span className='font-bold'>type</span> - Input type value like number, date, time etc.. </li>\n                  <li><span className='font-bold'>updateType</span> - This is used to update state variable in parent component</li>\n                  <li><span className='font-bold'>containerStyle</span> - Style class for container of input, which include label as well</li>\n                  <li><span className='font-bold'>labelTitle</span> - Title of the label</li>\n                  <li><span className='font-bold'>updateFormValue</span> - Function of parent component to update state variable</li>\n                </ul>\n            \n\n\n\n                 {/* Cards */}\n                 <h2 id=\"component3\">Cards</h2>\n                <p>\n                    <a href=\"https://daisyui.com/components/card/\" target=\"_blank\">Daisy UI</a> already have many cards layout, on top of that we have added one card component that accept title props and shows children inside its body. Also there is a divider between title and body of card. On more provision has been added to add buttons on top left side of card using TopSideButtons props (check leads page).\n\n                </p>\n                Ex - \n                <div className=\"mockup-code mt-4\">\n                    <pre className='my-0 py-0'><code>{'<TitleCard title={\"Card Title\"}> <h1>Card Body</h1></TitleCard>'}</code></pre>\n                </div>\n                <div className='p-8 bg-base-300 rounded-lg mt-4'>\n                    <TitleCard title={\"Card Title\"}> <h1>Card Body</h1></TitleCard>\n                </div>\n\n\n                 \n\n                    <div className='h-24'></div>\n\n\n            </article>\n        </>\n    )\n}\n\nexport default DocComponentsContent"
  },
  {
    "path": "webui/src/features/documentation/components/DocComponentsNav.js",
    "content": "import { useState } from \"react\"\n\nfunction DocComponentsNav({activeIndex}){\n\n    const SECTION_NAVS = [\n        {name : \"Typography\", isActive : activeIndex === 1 ? true : false},\n        {name : \"Form Input\", isActive : false},\n        {name : \"Cards\", isActive : false},\n    ]\n    const [navs, setNavs] = useState(SECTION_NAVS)\n\n    const scrollToSection = (currentIndex) => {\n        setNavs(navs.map((n, k) => {\n            if(k === currentIndex)return {...n, isActive : true}\n            else return {...n, isActive : false}\n        }))\n        document.getElementById('component'+(currentIndex+1)).scrollIntoView({behavior: 'smooth' })\n    }\n\n    return(\n        <ul className=\"menu w-56 mt-10 text-sm\">\n            <li className=\"menu-title\"><span className=\"\">Components</span></li>\n            \n            {\n                navs.map((n, k) => {\n                    return(\n                        <li key={k} onClick={() => scrollToSection(k)} className={n.isActive ? \"bordered\" : \"\"}><a>{n.name}</a></li>\n                    )\n                })\n            }\n        </ul>\n    )\n}\n\nexport default DocComponentsNav"
  },
  {
    "path": "webui/src/features/documentation/components/FeaturesContent.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport Subtitle from '../../../components/Typography/Subtitle'\nimport { setPageTitle, showNotification } from '../../common/headerSlice'\n\nfunction FeaturesContent(){\n\n    const dispatch = useDispatch()\n\n    return(\n        <>\n            <article className=\"prose\">\n              <h1 className=\"\">Features</h1>\n\n\n\n                {/* Authentication*/}\n              <h2 id=\"feature1\">Authentication</h2>\n                <p>\n                   JWT based Authentication logic is present in <span className=\"badge mt-0 mb-0 badge-ghost\">/app/auth.js</span>. In the file you can see we are adding bearer token in header for every request. Every routes under <span className=\"badge mt-0 mb-0 badge-ghost\">/routes/</span> folder will need authentication. For public routes like login, register you will have to add routes in <span className=\"badge mt-0 mb-0 badge-ghost\">App.js</span> file and also include the path in PUBLIC_ROUTES variable under <span className=\"badge mt-0 mb-0 badge-ghost\">/app/auth.js</span> file so that auto redirect to login page is not triggered.\n                   \n                </p>\n\n\n\n\n                   {/* Left Sidebar*/}\n              <h2 id=\"feature2\">Left Sidebar</h2>\n                  <p>\n                      This is main internal navigation (for pages that will come after login only), all sidebar menu items with their icons are present in <span className=\"badge mt-0 mb-0 badge-ghost\">/routes/sidebar.js</span>  file, while  path and page components mapping are respectively present in <span className=\"badge mt-0 mb-0 badge-ghost\">/routes/index.js</span> file.\n                    </p>\n\n\n\n                {/* Add New Page*/}\n            <h2 id=\"feature3\">Add New Page</h2>\n                <p>All <span className='font-semibold'>public routes</span> are present in <span className=\"badge mt-0 mb-0 badge-ghost\">App.js</span> file. 
\n\n                   {/* Left Sidebar*/}\n              <h2 id=\"feature2\">Left Sidebar</h2>\n                  <p>\n                      This is the main internal navigation (for pages that come after login only). All sidebar menu items with their icons are present in the <span className=\"badge mt-0 mb-0 badge-ghost\">/routes/sidebar.js</span> file, while the mapping of paths to page components is in the <span className=\"badge mt-0 mb-0 badge-ghost\">/routes/index.js</span> file.\n                    </p>\n\n\n\n                {/* Add New Page*/}\n            <h2 id=\"feature3\">Add New Page</h2>\n                <p>All <span className='font-semibold'>public routes</span> are present in the <span className=\"badge mt-0 mb-0 badge-ghost\">App.js</span> file. Steps to add a new public page - \n                </p>\n\n                <ul className='mt-0'>\n                        <li>Create the page inside the <span className=\"badge mt-0 mb-0 badge-ghost\">/pages</span> folder</li>\n                        <li>Go to <span className=\"badge mt-0 mb-0 badge-ghost\">App.js</span>, import the component and add its path</li>\n                        <li>Add your new route path in the <span className=\"badge mt-0 mb-0 badge-ghost\">/app/auth.js</span> file under the PUBLIC_ROUTES variable; this allows the page to open without login.</li>\n                </ul>\n\n                <p className='mt-4'>All <span className='font-semibold'>protected routes</span> are present in the <span className=\"badge mt-0 mb-0 badge-ghost\">/routes/sidebar.js</span> file</p>\n\n                <ul className='mt-0'>\n                        <li>Create your page inside the <span className=\"badge mt-0 mb-0 badge-ghost\">/pages/protected</span> folder</li>\n                        <li>Add your new routes in <span className=\"badge mt-0 mb-0 badge-ghost\">/routes/sidebar.js</span>; this shows your new page in the sidebar</li>\n                        <li>Import your new route's component and map its path in <span className=\"badge mt-0 mb-0 badge-ghost\">/routes/index.js</span> (see the sketch after this list)</li>\n                 </ul>\n\n
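                <p>A sketch of that last step (the page name here is hypothetical, and the exact route-object shape may differ in your copy of <span className=\"badge mt-0 mb-0 badge-ghost\">/routes/index.js</span>):</p>\n                <div className=\"mockup-code mt-4\">\n                    <pre className='my-0 py-0'><code>{'// /routes/index.js\\nconst Reports = lazy(() => import(\"../pages/protected/Reports\"))\\n\\nconst routes = [\\n  // ...existing routes\\n  { path: \"/reports\", component: Reports },\\n]'}</code></pre>\n                </div>\n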
\n\n              {/* Right Sidebar*/}\n              <h2 id=\"feature4\">Right Sidebar</h2>\n                    <div>\n                        This is used for showing long list contents like notifications, settings etc. We use redux to show and hide it; it is a single component and can be called from any file with the dispatch method.\n                        To add new content follow these steps:\n                        <ul>\n                          <li>Create a new component file containing the main body of your content</li>\n                          <li>Create a new variable in the <span className=\"badge mt-0 mb-0 badge-ghost\">/utils/globalConstantUtils.js</span> file under the RIGHT_DRAWER_TYPES variable</li>\n                          <li>Now include the file mapped to the new variable in the <span className=\"badge mt-0 mb-0 badge-ghost\">/containers/RightSidebar.js</span> file using the switch. <br />\n                           For example, if your new component's name is <span className=\"badge mt-0 mb-0 badge-ghost\">TestRightSideBar.js</span> and the variable name is TEST_RIGHT_SIDEBAR, then add the following code inside the switch block\n                          <br />\n                          <div className=\"mockup-code mt-4\">\n                                <pre className='my-0 py-0'><code>{`[RIGHT_DRAWER_TYPES.TEST_RIGHT_SIDEBAR] : \\n<TestRightSideBar {...extraObject} closeRightDrawer={close}/>`}</code></pre>\n                          </div>\n                          <span className='text-sm mt-1 italic'>Here extraObject holds the variables passed from the parent component when calling the openRightDrawer method</span>\n                          </li>\n                          <li>Now, as the last step, call the dispatch method as follows\n                          <div className=\"mockup-code mt-1\">\n                                <pre className='my-0 py-0'><code>{'import { useDispatch } from \"react-redux\"\\n  const dispatch = useDispatch()\\n  dispatch(openRightDrawer({header : \"Test Right Drawer\", \\n  bodyType : RIGHT_DRAWER_TYPES.TEST_RIGHT_SIDEBAR}))'}</code></pre>\n                          </div> \n                          </li>\n                        </ul>\n                    </div>\n\n\n                    {/* Themes*/}\n              <h2 id=\"feature5\">Themes</h2>\n                <p>\n                By default we have added light and dark themes, and Daisy UI comes with a number of other themes which you can use with no extra effort; you just have to include them in the <span className=\"badge mt-0 mb-0 badge-ghost\">tailwind.config.js</span> file. You can add themes like cupcake, corporate, retro etc. You can also configure theme colors in the config file; for more documentation on themes check out the <a href=\"https://daisyui.com/docs/themes/\" target=\"_blank\">Daisy UI documentation.</a>\n                </p>\n\n
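                <p>For example, enabling two extra built-in themes (a sketch of the daisyui section of <span className=\"badge mt-0 mb-0 badge-ghost\">tailwind.config.js</span>; merge it with whatever is already there):</p>\n                <div className=\"mockup-code mt-4\">\n                    <pre className='my-0 py-0'><code>{'// tailwind.config.js\\nmodule.exports = {\\n  // ...\\n  daisyui: {\\n    themes: [\"light\", \"dark\", \"cupcake\", \"corporate\"],\\n  },\\n}'}</code></pre>\n                </div>\n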
\n\n                    {/* Modal*/}\n              <h2 id=\"feature6\">Modal</h2>\n                  <div>\n                        With the global modal functionality you don't have to create a separate modal for each page. We use redux to show and hide it; it is a single component and can be called from any file with the dispatch method.\n                        Code for showing the modal is present in modalSlice and the layout container component. To show a modal just call the openModal() function of modalSlice using dispatch.\n                        <br />\n                        To add a new modal to any page follow these steps:\n                        <ul>\n                          <li>Create a new component file containing the main body of your modal content</li>\n                          <li>Create a new variable in the <span className=\"badge mt-0 mb-0 badge-ghost\">/utils/globalConstantUtils.js</span> file under the MODAL_BODY_TYPES variable</li>\n                          <li>Now include the file mapped to the new variable in the <span className=\"badge mt-0 mb-0 badge-ghost\">/containers/ModalLayout.js</span> file using the switch. <br />\n                           For example, if your new component's name is <span className=\"badge mt-0 mb-0 badge-ghost\">TestModal.js</span> and the variable name is TEST_MODAL, then add the following code inside the switch block\n                          <br />\n                          <div className=\"mockup-code mt-4\">\n                                <pre className='my-0 py-0'><code>{`[MODAL_BODY_TYPES.TEST_MODAL] : \\n<TestModal closeModal={close} extraObject={extraObject}/>`}</code></pre>\n                          </div>\n                          <span className='text-sm mt-1 italic'>Here extraObject holds the variables passed from the parent component when calling the openModal method</span>\n                          </li>\n                          <li>Now, as the last step, call the dispatch method as follows\n                          <div className=\"mockup-code mt-1\">\n                                <pre className='my-0 py-0'><code>{'import { useDispatch } from \"react-redux\"\\n  const dispatch = useDispatch()\\n   dispatch(openModal({title : \"Test Modal Title\", \\n   bodyType : MODAL_BODY_TYPES.TEST_MODAL}))'}</code></pre>\n                          </div> \n                          </li>\n                        </ul>\n                    </div>\n\n\n\n\n                 \n\n\n                  {/* Notification*/}\n                  <h2 id=\"feature7\">Notification</h2>\n                  <p>Often we have to show a notification to the user, be it on successful form submission or any API success. The requirement to show a notification can come from any page, so global notification handling is needed.</p>\n\n                    <p className='mt-4'>Code for showing notifications is present in headerSlice and the layout container component. To show a notification just call the <span className='badge badge-ghost'>showNotification()</span> function of headerSlice using dispatch. To show a success notification pass status as 1, and to show an error message pass status as 0.</p> \n\n                    <div className=\"mockup-code mb-4\">\n                          <pre className='my-0 py-0'><code>{'import { useDispatch } from \"react-redux\"\\n  const dispatch = useDispatch()\\n  dispatch(showNotification({message : \"Message here\", status : 1}))'}</code></pre>\n                    </div> \n\n                    <p>Click on this button to check</p>\n\n                    <button className='btn btn-success' onClick={() => dispatch(showNotification({message : \"Your message has been sent!\", status : 1}))}>Success</button>\n\n                    <button className='btn btn-error ml-4' onClick={() => dispatch(showNotification({message : \"Something went wrong!\", status : 0}))}>Error</button>\n\n\n                    <div className='h-24'></div>\n\n\n            </article>\n        </>\n    )\n}\n\nexport default FeaturesContent"
  },
  {
    "path": "webui/src/features/documentation/components/FeaturesNav.js",
    "content": "import { useState } from \"react\"\n\nfunction FeaturesNav({activeIndex}){\n\n    const SECTION_NAVS = [\n        {name : \"Authentication\", isActive : activeIndex === 1 ? true : false},\n        {name : \"Sidebar\", isActive : false},\n        {name : \"Add New Page\", isActive : false},\n        {name : \"Right sidebar\", isActive : false},\n        {name : \"Themes\", isActive : false},\n        {name : \"Modal\", isActive : false},\n        {name : \"Notification\", isActive : false},\n    ]\n    const [navs, setNavs] = useState(SECTION_NAVS)\n\n    const scrollToSection = (currentIndex) => {\n        setNavs(navs.map((n, k) => {\n            if(k === currentIndex)return {...n, isActive : true}\n            else return {...n, isActive : false}\n        }))\n        document.getElementById('feature'+(currentIndex+1)).scrollIntoView({behavior: 'smooth' })\n    }\n\n    return(\n        <ul className=\"menu w-56 mt-10 text-sm\">\n            <li className=\"menu-title\"><span className=\"\">Features</span></li>\n            \n            {\n                navs.map((n, k) => {\n                    return(\n                        <li key={k} onClick={() => scrollToSection(k)} className={n.isActive ? \"bordered\" : \"\"}><a>{n.name}</a></li>\n                    )\n                })\n            }\n        </ul>\n    )\n}\n\nexport default FeaturesNav"
  },
  {
    "path": "webui/src/features/documentation/components/GettingStartedContent.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport Subtitle from '../../../components/Typography/Subtitle'\nimport { setPageTitle } from '../../common/headerSlice'\n\nfunction GettingStartedContent(){\n\n    const dispatch = useDispatch()\n\n\n\n    return(\n        <>\n            <article className=\"prose\">\n              <h1 className=\"\">Getting Started</h1>\n\n\n              {/* Introduction */}\n              <h2 className=\"\" id=\"getstarted1\">Introduction</h2>\n              <p>A free dashboard template using <span className='font-bold'>Daisy UI</span> and react js. With the help of Dasisy UI, it comes with <span className='font-bold'>fully customizable and themable CSS</span> and power of Tailwind CSS utility classes. We have also added <span className='font-bold'>redux toolkit</span>  and configured it for API calls and state management.</p> \n              <p>User authentication has been implemented using JWT token method (ofcourse you need backend API for generating and verifying token). This template can be used to start your next SaaS project or build new internal tools in your company.</p>\n              <h4> Core libraries used - </h4>\n              <ul>\n                  <li><a href=\"https://reactjs.org/\" target=\"_blank\">React JS v18.2.0</a></li>\n                  <li><a href=\"https://reactrouter.com/en/main\" target=\"_blank\">React Router v6.4.3</a></li>\n                  <li><a href=\"https://tailwindcss.com/\" target=\"_blank\">Tailwind CSS v3.3.6</a></li>\n                  <li><a href=\"https://daisyui.com/\" target=\"_blank\">Daisy UI v4.4.19</a></li>\n                  <li><a href=\"https://heroicons.com/\" target=\"_blank\">HeroIcons v2.0.13</a></li>\n                  <li><a href=\"https://redux-toolkit.js.org/\" target=\"_blank\">Redux toolkit v1.9.0</a></li>\n                  <li><a href=\"https://react-chartjs-2.js.org/\" target=\"_blank\">React ChartJS 2 v5.0.1</a></li>\n              </ul>\n              <h4>Major features - </h4>\n              <p className=''>Almost all major UI components are available in Daisy UI library. 
Apart from this logic has been added for following - </p>\n              <ul>\n                  <li> <span className='font-bold'>Light/dark</span> mode toggle</li>\n                  <li> Token based user authentication</li>\n                  <li> <span className='font-bold'>Submenu support</span> in sidebar</li>\n                  <li> Store management using <span className='font-bold'>redux toolkit</span></li>\n                  <li> <span className='font-bold'>Daisy UI</span> components</li>\n                  <li> <span className='font-bold'>Right and left sidebar</span>, Universal loader, notifications and other components</li>\n                  <li> React <span className='font-bold'>chart js 2</span> examples</li>\n              </ul>\n              \n\n\n\n\n              {/* How to Use */}\n              <h2 id=\"getstarted2\">How to use?</h2>\n                <p>\n                    Just clone the repo from github and then run following command (Make sure you have node js installed )<br/>\n                    <a href=\"https://github.com/srobbin01/daisyui-admin-dashboard-template\" className='text-sm text-blue-500' target=\"_blank\">Repo Link</a>\n                    <br />\n                    <code> npm install </code><br />\n                    <code>npm start</code>\n                </p>\n\n\n              {/* Tailwind CSS*/}\n              <h2 id=\"getstarted3\">Tailwind CSS</h2>\n                <p>\n                Tailwind CSS is a utility-first CSS framework with predefined classes that you can use to build and design the UI directly in the JSX. We have also included Daisy UI Component, that is based on tailwind CSS.\n                </p>\n\n              {/* Daisy UI */}\n              <h2 id=\"getstarted4\">Daisy UI</h2>\n\n              <p><a href=\"https://daisyui.com/\" target=\"_blank\" className='text-xl btn-link'>Daisy UI</a>, a popular free and opensource tailwind component library has been used for this template. It has a rich collection of components, layouts and is fully customizable and themeable.</p>\n              \n              <p>Apart from this it also helps in making HTML code more cleaner as we don't have to include all utility classes of tailwind to make the UI. Check components <a href=\"https://daisyui.com/components/button/\" target=\"_blank\" className='btn-link'>documentation here</a>. 
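                <p>A minimal sketch of a line chart (this snippet follows the react-chartjs-2 v5 API; the register call is required by Chart.js v3+ and the data here is illustrative, not taken from this template's pages):</p>\n                <div className=\"mockup-code mb-4\">\n                    <pre className='my-0 py-0'><code>{'import { Chart, registerables } from \"chart.js\"\\nimport { Line } from \"react-chartjs-2\"\\n\\nChart.register(...registerables)\\n\\nconst data = {\\n  labels: [\"Jan\", \"Feb\", \"Mar\"],\\n  datasets: [{ label: \"Visits\", data: [12, 19, 7] }],\\n}\\n\\n<Line data={data} />'}</code></pre>\n                </div>\n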
\n\n                  {/* Redux Toolkit */}\n              <h2 id=\"getstarted6\">Redux Toolkit</h2>\n                 <p>\n                 The Redux Toolkit package helps in writing redux logic easily. It was originally created to help address three common concerns about Redux:\n                 </p>\n                 <ul>\n                    <li>Configuring a Redux store is too complicated</li>\n                    <li>I have to add a lot of packages to get Redux to do anything useful</li>\n                    <li>Redux requires too much boilerplate code</li>\n                 </ul>\n                 <p>\n                    This library has been configured and used for showing notifications, modals and loading data from the API on the leads page.\n                 </p>\n\n
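                <p>A minimal sketch of a store with a single slice (the slice here is illustrative, not one of this template's slices):</p>\n                <div className=\"mockup-code mb-4\">\n                    <pre className='my-0 py-0'><code>{'import { configureStore, createSlice } from \"@reduxjs/toolkit\"\\n\\nconst counterSlice = createSlice({\\n  name: \"counter\",\\n  initialState: { value: 0 },\\n  reducers: { increment: (state) => { state.value += 1 } },\\n})\\n\\nexport const { increment } = counterSlice.actions\\nexport const store = configureStore({\\n  reducer: { counter: counterSlice.reducer },\\n})'}</code></pre>\n                </div>\n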
\n                  {/* Hero Icons */}\n              <h2 id=\"getstarted7\">Hero Icons</h2>\n                <p>The <a href=\"https://heroicons.com/\" target=\"_blank\" className='text-xl btn-link'>HeroIcons</a> library has been used for all the icons in this template. It has a rich collection of SVG icons, and is made by the makers of Tailwind CSS.</p>\n\n                <p className='mt-4'>Each icon can be imported individually as a React component, check the <a href=\"https://github.com/tailwindlabs/heroicons\" target=\"_blank\" className='btn-link'>documentation</a></p>\n\n                <pre><code>{\"import BeakerIcon from '@heroicons/react/24/solid/BeakerIcon'\"}</code></pre>\n                <p>Use it as follows in your component</p>\n                <pre><code>{\"<BeakerIcon className='h-6 w-6'/>\"}</code></pre>\n\n                <div className=\"divider \"></div>\n\n                <div className=\"alert mt-4 alert-warning shadow-lg\">\n                    <div><span>Note: Importing all icons in a single line will increase your build time</span></div>\n                </div>\n\n                <p>Don't import like this (it will load all icons and increase build time)</p>\n                <pre><code>{\"import {BeakerIcon, BellIcon } from '@heroicons/react/24/solid'\"}</code></pre>\n\n                <p>Instead import as follows</p>\n                <pre><code>{\"import BeakerIcon from '@heroicons/react/24/solid/BeakerIcon'\"}<br />\n                {\"import BellIcon from '@heroicons/react/24/solid/BellIcon'\"}</code></pre>\n\n                <div className=\"badge badge-secondary\">This is the better way to import icons</div>\n\n\n\n                 {/* Project Structure */}\n              <h2 id=\"getstarted8\">Project Structure</h2>\n              <h4>Folders - </h4>\n              <ul className='mt-0'>\n                  <li>app - store management, auth and library settings</li>\n                  <li>components - includes all common components used in the project</li>\n                  <li>containers - components related to layout like sidebar, page layout, header etc.</li>\n                  <li>features - the main folder where all page logic resides; there is a folder for each page, with additional folders inside to group functionalities like components, modals etc. The Redux slice file is also present inside the page-specific folder.</li>\n                  <li>pages - contains one single file per page; if you want to divide a page into different component files, use the features folder and create a separate folder for that page</li>\n                  <li>routes - all settings related to routes</li>\n                </ul>\n\n              <h4>Files - </h4>\n              <ul className='mt-0'>\n                    <li>App.js - Main file containing different routes and components </li>\n                    <li>index.css - Additional global css if required</li>\n                    <li>index.js - Entry point of project</li>\n                    <li>package.json - All dependencies and npm scripts</li>\n                    <li>tailwind.config.js - Tailwind CSS configuration file, add theme customization and new themes in this file</li>\n                </ul>\n\n\n                <div className='h-24'></div>\n\n            </article>\n        </>\n    )\n}\n\nexport default GettingStartedContent"
  },
  {
    "path": "webui/src/features/documentation/components/GettingStartedNav.js",
    "content": "import { useState } from \"react\"\n\nfunction GettingStartedNav({activeIndex}){\n\n    const SECTION_NAVS = [\n        {name : \"Introduction\", isActive : activeIndex === 1 ? true : false},\n        {name : \"How to Use\", isActive : false},\n        {name : \"Tailwind CSS\", isActive : false},\n        {name : \"Daisy UI\", isActive : false},\n        {name : \"Chart JS\", isActive : false},\n        {name : \"Redux Toolkit\", isActive : false},\n        {name : \"Hero Icons\", isActive : false},\n        {name : \"Project Structure\", isActive : false},\n    ]\n    const [navs, setNavs] = useState(SECTION_NAVS)\n\n    const scrollToSection = (currentIndex) => {\n        setNavs(navs.map((n, k) => {\n            if(k === currentIndex)return {...n, isActive : true}\n            else return {...n, isActive : false}\n        }))\n        document.getElementById('getstarted'+(currentIndex+1)).scrollIntoView({behavior: 'smooth' })\n    }\n\n    return(\n        <ul className=\"menu w-56 mt-10 text-sm\">\n            <li className=\"menu-title\"><span className=\"\">Getting Started</span></li>\n            \n            {\n                navs.map((n, k) => {\n                    return(\n                        <li key={k} onClick={() => scrollToSection(k)} className={n.isActive ? \"bordered\" : \"\"}><a>{n.name}</a></li>\n                    )\n                })\n            }\n        </ul>\n    )\n}\n\nexport default GettingStartedNav"
  },
  {
    "path": "webui/src/features/experiment/components/DashboardStats.js",
    "content": "function DashboardStats({title, icon, value, description, colorIndex}){\n\n    const COLORS = [\"primary\", \"primary\"]\n\n    const getDescStyle = () => {\n        if(description.includes(\"↗︎\"))return \"font-bold text-green-700 dark:text-green-300\"\n        else if(description.includes(\"↙\"))return \"font-bold text-rose-500 dark:text-red-400\"\n        else return \"\"\n    }\n\n    return(\n        <div className=\"stats shadow\">\n            <div className=\"stat\">\n                <div className={`stat-figure dark:text-slate-300 text-${COLORS[colorIndex%2]}`}>{icon}</div>\n                <div className=\"stat-title dark:text-slate-300\">{title}</div>\n                <div className={`stat-value dark:text-slate-300 text-${COLORS[colorIndex%2]}`}>{value}</div>\n                <div className={\"stat-desc  \" + getDescStyle()}>{description}</div>\n            </div>\n        </div>\n    )\n}\n\nexport default DashboardStats"
  },
  {
    "path": "webui/src/features/experiment/components/SearchData.js",
    "content": "import React, {useState} from \"react\";\n\nimport {\n    Row,\n    Col,\n    Button,\n    InputNumber,\n    Slider,\n    Space,\n    Input,\n    Form,\n    ConfigProvider,\n    Select,\n    Modal,\n} from \"antd\";\n\n\nfunction SearchData({set_dataset}) {\n  const [form] = Form.useForm()\n\n  const onFinish = (values) => {\n    const messageToSend = values;\n    console.log('Request data:', messageToSend);\n    // 向后端发送请求...\n    fetch('http://localhost:5001/api/configuration/search_dataset', {\n      method: 'POST',\n      headers: {\n        'Content-Type': 'application/json',\n      },\n      body: JSON.stringify(messageToSend), \n    })\n    .then(response => {\n      if (!response.ok) {\n        throw new Error('Network response was not ok');\n      } \n      return response.json();\n    })\n    .then(message => {\n      console.log('Message from back-end:', message);\n        set_dataset(message)\n      }\n    )\n    .catch((error) => {\n      console.error('Error sending message:', error);\n      var errorMessage = error.error;\n      Modal.error({\n        title: 'Information',\n        content: 'Error:' + errorMessage\n      })\n    });\n  }\n\n  return(\n    <ConfigProvider\n      theme={{\n        components: {\n          Input: {\n            addonBg:\"white\"\n          },\n        },\n      }}  \n    >\n    <Form\n      name=\"SearchData\"\n      form={form}\n      onFinish={onFinish}\n      style={{width:\"100%\"}}\n      autoComplete=\"off\"\n    >\n      <Space className=\"space\" style={{ display: 'flex'}} align=\"baseline\">\n        <Form.Item\n          name=\"task_name\"\n          style={{flexGrow: 1}}\n        >\n          <Input addonBefore={\"Dataset Name\"}/>\n        </Form.Item>\n        <Form.Item\n          name=\"num_variables\"\n          style={{flexGrow: 1}}\n        >\n          <Input addonBefore={\"Num of Variables\"}/>\n        </Form.Item>\n      </Space>\n      <Space className=\"space\" style={{ display: 'flex'}} align=\"baseline\">\n        <Form.Item\n          name=\"variables_name\"\n          style={{flexGrow: 1}}\n        >\n          <Input addonBefore={\"Variable Name\"}/>\n        </Form.Item>\n        <Form.Item\n          name=\"num_objectives\"\n          style={{flexGrow: 1}}\n        >\n          <Input addonBefore={\"Num of Objectives\"}/>\n        </Form.Item>\n      </Space>\n      <h6 style={{color:\"black\"}}>Search method:</h6>\n      <Space className=\"space\" style={{ display: 'flex'}} align=\"baseline\">\n      <Form.Item\n        name=\"search_method\"\n      >\n        <Select style={{minWidth: 150}}\n          options={[ {value: \"Hash\"},\n                      {value: \"Fuzzy\"},\n                      {value: \"LSH\"},\n                  ]}\n        />\n      </Form.Item>\n      <Form.Item>\n        <Button type=\"primary\" htmlType=\"submit\" style={{width:\"120px\"}}>\n          Search\n        </Button>\n      </Form.Item>\n      </Space>\n    </Form>\n    </ConfigProvider>\n  )\n}\n\nexport default SearchData"
  },
  {
    "path": "webui/src/features/experiment/components/SelectAlgorithm.js",
    "content": "import React, { useState } from \"react\";\nimport { PlusOutlined } from '@ant-design/icons';\nimport { Button, Form, Select, Drawer, Modal } from \"antd\";\nimport DashboardStats from './DashboardStats'; // 确保路径正确\nimport UserGroupIcon from '@heroicons/react/24/outline/UserGroupIcon';\nimport UsersIcon from '@heroicons/react/24/outline/UsersIcon';\nimport CircleStackIcon from '@heroicons/react/24/outline/CircleStackIcon';\nimport CreditCardIcon from '@heroicons/react/24/outline/CreditCardIcon';\n\nconst filterOption = (input, option) =>\n  (option?.value ?? '').toLowerCase().includes(input.toLowerCase());\n\nfunction SelectAlgorithm({ SpaceRefiner, Sampler, Pretrain, Model, ACF, Normalizer }) {\n  const [drawerVisible, setDrawerVisible] = useState(false);\n  const [form] = Form.useForm(); // Form instance to manage form submission in the drawer\n  const [selectedValues, setSelectedValues] = useState({\n    SpaceRefiner: SpaceRefiner[0]?.name || '',\n    Sampler: Sampler[0]?.name || '',\n    Pretrain: Pretrain[0]?.name || '',\n    Model: Model[0]?.name || '',\n    ACF: ACF[0]?.name || '',\n    Normalizer: Normalizer[0]?.name || '',\n    SpaceRefinerParameters: '',\n    SpaceRefinerDataSelector: 'None',\n    SpaceRefinerDataSelectorParameters: '',\n    SamplerParameters: '',\n    SamplerInitNum: '11',\n    SamplerDataSelector: 'None',\n    SamplerDataSelectorParameters: '',\n    PretrainParameters: '',\n    PretrainDataSelector: 'None',\n    PretrainDataSelectorParameters: '',\n    ModelParameters: '',\n    ModelDataSelector: 'None',\n    ModelDataSelectorParameters: '',\n    ACFParameters: '',\n    ACFDataSelector: 'None',\n    ACFDataSelectorParameters: '',\n    NormalizerParameters: '',\n    NormalizerDataSelector: 'None',\n    NormalizerDataSelectorParameters: '',\n  });\n\n  const [showDashboardStats, setShowDashboardStats] = useState(false);\n\n  const handleDrawerSubmit = () => {\n    form\n      .validateFields()\n      .then(formValues => {\n        // Combine selectedValues with formValues\n        const messageToSend = { ...selectedValues, ...formValues };\n  \n        fetch('http://localhost:5001/api/configuration/select_algorithm', {\n          method: 'POST',\n          headers: {\n            'Content-Type': 'application/json',\n          },\n          body: JSON.stringify(messageToSend),\n        })\n        .then(response => {\n          if (!response.ok) {\n            throw new Error('Network response was not ok');\n          }\n          return response.json();\n        })\n        .then(succeed => {\n          console.log('Message from back-end:', succeed);\n          Modal.success({\n            title: 'Information',\n            content: 'Submit successfully!',\n          });\n          setSelectedValues({ ...selectedValues, ...formValues }); // Update selectedValues with formValues\n          setShowDashboardStats(true);\n          form.resetFields(); // Reset the form fields after submission\n          setDrawerVisible(false); // Close the drawer\n        })\n        .catch(error => {\n          console.error('Error sending message:', error);\n          Modal.error({\n            title: 'Information',\n            content: 'Error: ' + error.message,\n          });\n        });\n      })\n      .catch(info => {\n        console.log('Validate Failed:', info);\n      });\n  };\n\n  const statsData = [\n    {title: \"Space Refiner\", value: selectedValues.SpaceRefiner || null, icon: <UserGroupIcon className='w-8 h-8'/>, description: \"Details of Space 
Refiner\"},\n    {title: \"Sampler\", value: selectedValues.Sampler || \"N/A\", icon: <UserGroupIcon className='w-8 h-8'/>, description: \"Details of Sampler\"},\n    {title: \"Pretrain\", value: selectedValues.Pretrain || null, icon: <UserGroupIcon className='w-8 h-8'/>, description: \"Details of Pretrain\"},\n    {title: \"Model\", value: selectedValues.Model || \"N/A\", icon: <UserGroupIcon className='w-8 h-8'/>, description: \"Details of Model\"},\n    {title: \"Acquisition Function\", value: selectedValues.ACF || \"N/A\", icon: <UserGroupIcon className='w-8 h-8'/>, description: \"Details of Acquisition Function\"},\n    {title: \"Normalizer\", value: selectedValues.Normalizer || \"N/A\", icon: <UserGroupIcon className='w-8 h-8'/>, description: \"Details of Normalizer\"},\n  ].filter(stat => stat.value !== null && stat.value !== \"N/A\"); // Filter out entries with null or \"N/A\" values\n\n  return (\n    <>\n      <Button type=\"primary\" onClick={() => setDrawerVisible(true)} icon={<PlusOutlined />} style={{ width: \"150px\" }}>\n        Start building\n      </Button>\n\n      <Drawer\n        title=\"Select Algorithm\"\n        placement=\"right\"\n        closable={true}\n        onClose={() => setDrawerVisible(false)}\n        visible={drawerVisible}\n        width={720}\n      >\n        <Form\n          form={form}\n          name=\"drawer_form\"\n          onFinish={handleDrawerSubmit}\n          style={{ width: \"100%\" }}\n          autoComplete=\"off\"\n          initialValues={selectedValues}\n        >\n          <Form.Item\n            name=\"SpaceRefiner\"\n            label={<span style={{ fontSize: '18px', fontWeight: 'bold' }}>Space Refiner</span>}\n            rules={[{ required: true, message: 'Please select a Space Refiner!' }]}\n          >\n            <Select\n              showSearch\n              placeholder=\"Space Refiner\"\n              optionFilterProp=\"value\"\n              filterOption={filterOption}\n              style={{ fontSize: '14px', width: '300px' }}\n              options={SpaceRefiner.map(item => ({ value: item.name }))}\n            />\n          </Form.Item>\n          <Form.Item\n            name=\"Sampler\"\n            label={<span style={{ fontSize: '18px', fontWeight: 'bold' }}>Sampler</span>}\n            rules={[{ required: true, message: 'Please select a Sampler!' }]}\n          >\n            <Select\n              showSearch\n              placeholder=\"Sampler\"\n              optionFilterProp=\"value\"\n              filterOption={filterOption}\n              style={{ fontSize: '14px', width: '300px' }}\n              options={Sampler.map(item => ({ value: item.name }))}\n            />\n          </Form.Item>\n          <Form.Item\n            name=\"Pretrain\"\n            label={<span style={{ fontSize: '18px', fontWeight: 'bold' }}>Pretrain</span>}\n            rules={[{ required: true, message: 'Please select a Pretrain!' }]}\n          >\n            <Select\n              showSearch\n              placeholder=\"Pretrain\"\n              optionFilterProp=\"value\"\n              filterOption={filterOption}\n              style={{ fontSize: '14px', width: '300px' }}\n              options={Pretrain.map(item => ({ value: item.name }))}\n            />\n          </Form.Item>\n          <Form.Item\n            name=\"Model\"\n            label={<span style={{ fontSize: '18px', fontWeight: 'bold' }}>Model</span>}\n            rules={[{ required: true, message: 'Please select a Model!' 
}]}\n          >\n            <Select\n              showSearch\n              placeholder=\"Model\"\n              optionFilterProp=\"value\"\n              filterOption={filterOption}\n              style={{ fontSize: '14px', width: '300px' }}\n              options={Model.map(item => ({ value: item.name }))}\n            />\n          </Form.Item>\n          <Form.Item\n            name=\"ACF\"\n            label={<span style={{ fontSize: '18px', fontWeight: 'bold' }}>ACF</span>}\n            rules={[{ required: true, message: 'Please select an ACF!' }]}\n          >\n            <Select\n              showSearch\n              placeholder=\"ACF\"\n              optionFilterProp=\"value\"\n              filterOption={filterOption}\n              style={{ fontSize: '14px', width: '300px' }}\n              options={ACF.map(item => ({ value: item.name }))}\n            />\n          </Form.Item>\n          <Form.Item\n            name=\"Normalizer\"\n            label={<span style={{ fontSize: '18px', fontWeight: 'bold' }}>Normalizer</span>}\n            rules={[{ required: true, message: 'Please select a Normalizer!' }]}\n          >\n            <Select\n              showSearch\n              placeholder=\"Normalizer\"\n              optionFilterProp=\"value\"\n              filterOption={filterOption}\n              style={{ fontSize: '14px', width: '300px' }}\n              options={Normalizer.map(item => ({ value: item.name }))}\n            />\n          </Form.Item>\n\n          <Form.Item>\n            <Button type=\"primary\" htmlType=\"submit\" style={{ width: \"150px\" }}>\n              Apply\n            </Button>\n          </Form.Item>\n        </Form>\n      </Drawer>\n\n      {showDashboardStats && (\n        <div className=\"grid lg:grid-cols-3 md:grid-cols-2 sm:grid-cols-1 gap-6 mt-6\">\n          {statsData.map((d, k) => (\n            <DashboardStats key={k} {...d} colorIndex={k} />\n          ))}\n        </div>\n      )}\n    </>\n  );\n}\n\nexport default SelectAlgorithm;\n"
  },
  {
    "path": "webui/src/features/experiment/components/SelectData.js",
    "content": "import React, {useState} from \"react\";\n\nimport {\n    Button,\n    Checkbox,\n    ConfigProvider,\n    Modal,\n    Select,\n    Input,\n} from \"antd\";\n\nconst CheckboxGroup = Checkbox.Group;\n\nfunction SelectData({DatasetData, updateTable, DatasetSelector}) {\n    var data = []\n    if (DatasetData.isExact) {\n      data = [DatasetData.datasets.name]\n    } else {\n      data = DatasetData.datasets\n    }\n    const [checkedList, setCheckedList] = useState([]);\n    const [selectedOption, setSelectedOption] = useState();\n    const [selector, setSelector] = useState(\"None\");\n    const [parameter, setParameter] = useState(\"\");\n    const checkAll = data.length === checkedList.length;\n    const indeterminate = checkedList.length > 0 && checkedList.length < data.length;\n    const onChange = (list) => {\n        setCheckedList(list);\n    };\n    const onCheckAllChange = (e) => {\n        setCheckedList(e.target.checked ? data : []);\n    };\n    const handleSelectChange = (value) => {\n      setSelectedOption(value); // 当选择发生变化时更新选项\n    };\n    const handleSelectorChange = (value) => {\n      setSelector(value); \n    };\n    const handleParameterChange = (event) => {\n      setParameter(event.target.value);\n    };\n    const handleClick = () => {\n      const datasetList = checkedList.map(item => {\n        return item;\n      });\n      const messageToSend = {\n        object: selectedOption,\n        DatasetSelector: selector,\n        parameter: parameter,\n        datasets: datasetList,\n      }\n      updateTable(messageToSend)\n      console.log(messageToSend)\n      fetch('http://localhost:5001/api/configuration/dataset', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        } \n        return response.json();\n      })\n      .then(succeed => {\n        console.log('Message from back-end:', succeed);\n        Modal.success({\n          title: 'Information',\n          content: 'Submit successfully!'\n        })\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n        var errorMessage = error.error;\n        Modal.error({\n          title: 'Information',\n          content: 'Error:' + errorMessage\n        })\n      });\n    }\n\n    const handleDelete = () => {\n      const datasetList = checkedList.map(item => {\n        return item;\n      });\n      const messageToSend = {\n        datasets: datasetList,\n      }\n      console.log(messageToSend)\n      fetch('http://localhost:5001/api/configuration/delete_dataset', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        } \n        return response.json();\n      })\n      .then(succeed => {\n        console.log('Message from back-end:', succeed);\n        datasetList.forEach(item => {\n          let index = data.indexOf(item);\n          if (index !== -1) {\n            data.splice(index, 1);\n          }\n        });\n        var newDataset = {\"isExact\": false, \"datasets\": data}\n        console.log(\"new dataset:\", newDataset)\n        // set_dataset(newDataset)\n        Modal.success({\n     
     title: 'Information',\n          content: 'Delete successfully!'\n        })\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n        var errorMessage = error.error;\n        Modal.error({\n          title: 'Information',\n          content: 'Error:' + errorMessage\n        })\n      });\n    }\n\n    return(\n        <ConfigProvider\n          theme={{\n            components: {\n              Checkbox: {\n                colorText:\"black\"\n              },\n            },\n          }}        \n        >\n          <div style={{ overflowY: 'auto', maxHeight: '300px' }}>\n            <Checkbox indeterminate={indeterminate} onChange={onCheckAllChange} checked={checkAll}>\n                Check all\n            </Checkbox>\n            <CheckboxGroup options={data} value={checkedList} onChange={onChange}/>\n          </div>\n          <Info  isExact={DatasetData.isExact} data={DatasetData.datasets}/>\n          <div style={{marginTop:\"20px\"}}>\n            <Select\n            style={{minWidth: 170, margin: 5}}\n            options={[ {value: \"Narrow Search Space\"},\n                        {value: \"Initialization\"},\n                        {value: \"Pre-train\"},\n                        {value: \"Surrogate Model\"},\n                        {value: \"Acquisition Function\"},\n                        {value: \"Normalizer\"}\n                    ]}\n            onChange={handleSelectChange}\n            />\n            <Select\n            style={{minWidth: 90, margin:5}}\n            placeholder = \"Dataset Selector\"\n            options = {DatasetSelector.map(item => ({ value: item.name })).concat({ value: \"None\" })}\n            onChange={handleSelectorChange}\n            />\n            <Input style={{width: 400, margin:5}} placeholder=\"Parameters\" onChange={handleParameterChange}/>\n            <Button type=\"primary\" htmlType=\"submit\" style={{width:\"120px\", margin:5}} onClick={handleClick}>\n              Submit\n            </Button>\n            <Button danger style={{width:\"120px\", margin:5}} onClick={handleDelete}>\n              Delete\n            </Button>\n          </div>\n        </ConfigProvider>\n    )\n}\n\n\nfunction Info({isExact, data}) {\n  if (isExact) {\n    return (\n      <div style={{ overflowY: 'auto', maxHeight: '250px' }}>\n        <h4><strong>Information</strong></h4>\n          <ul>\n            <li><h6><span className=\"fw-semi-bold\">Name</span>: {data.name}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Dim</span>: {data.dim}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Obj</span>: {data.obj}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Fidelity</span>: {data.fidelity}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Workloads</span>: {data.workloads}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Budget type</span>: {data.budget_type}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Budget</span>: {data.budget}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Seeds</span>: {data.seeds}</h6></li>\n          </ul>\n          <h4 className=\"mt-5\"><strong>Algorithm</strong></h4>\n          <ul>\n            <li><h6><span className=\"fw-semi-bold\">Space refiner</span>: {data.SpaceRefiner}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Sampler</span>: {data.Sampler}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Pretrain</span>: 
{data.Pretrain}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Model</span>: {data.Model}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">ACF</span>: {data.ACF}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">DatasetSelector</span>: {data.DatasetSelector}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Normalizer</span>: {data.Normalizer}</h6></li>\n          </ul>\n          <h4 className=\"mt-5\"><strong>Auxiliary Data List</strong></h4>\n          <ul>\n            {data.metadata.map((dataset, index) => (\n              <li key={index}><h6>{dataset}</h6></li>\n            ))}\n          </ul>\n      </div>\n    )\n  } else {\n    return (\n      <></>\n    )\n  }\n}\n\nexport default SelectData"
  },
  {
    "path": "webui/src/features/experiment/components/SelectTask.js",
    "content": "import React, { useState } from \"react\";\nimport { PlusOutlined } from '@ant-design/icons';\nimport { Button, Form, Input, Select, Modal, Drawer, Table } from \"antd\";\n\nconst filterOption = (input, option) =>\n  (option?.value ?? '').toLowerCase().includes(input.toLowerCase());\n\nfunction TaskTable({ tasks }) {\n  return (\n    <Table\n      dataSource={tasks}\n      pagination={false}\n      rowKey=\"name\"\n      columns={[\n        { title: '#', dataIndex: 'index', key: 'index' },\n        { title: 'Task Name', dataIndex: 'name', key: 'name' },\n        { title: 'Variables', dataIndex: 'num_vars', key: 'num_vars' },\n        { title: 'Objectives', dataIndex: 'num_objs', key: 'num_objs' },\n        { title: 'Fidelity', dataIndex: 'fidelity', key: 'fidelity' },\n        { title: 'Workloads', dataIndex: 'workloads', key: 'workloads' },\n        { title: 'Budget Type', dataIndex: 'budget_type', key: 'budget_type' },\n        { title: 'Budget', dataIndex: 'budget', key: 'budget' }\n      ]}\n      locale={{\n        emptyText: 'No task'\n      }}\n    />\n    \n  );\n}\n\nfunction SelectTask({ data, updateTable }) {\n  const [drawerVisible, setDrawerVisible] = useState(false);\n  const [form] = Form.useForm(); // Form instance to manage form submission in the drawer\n  const [tasks, setTasks] = useState([]); // State to store tasks added from Drawer\n\n  const onFinish = (values) => {\n    const messageToSend = tasks.map(task => ({\n      name: task.name,\n      num_vars: parseInt(task.num_vars),\n      num_objs: task.num_objs,\n      fidelity: task.fidelity,\n      workloads: task.workloads,\n      budget_type: task.budget_type,\n      budget: task.budget,\n    }));\n    updateTable(messageToSend);\n    console.log('Request data:', messageToSend);\n    // Send request to backend...\n    fetch('http://localhost:5001/api/configuration/select_task', {\n      method: 'POST',\n      headers: {\n        'Content-Type': 'application/json',\n      },\n      body: JSON.stringify(messageToSend),\n    })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        }\n        return response.json();\n      })\n      .then(succeed => {\n        console.log('Message from back-end:', succeed);\n        Modal.success({\n          title: 'Information',\n          content: 'Submit successfully!'\n        });\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n        Modal.error({\n          title: 'Information',\n          content: 'Error: ' + error.message\n        });\n      });\n  };\n\n  const handleDrawerSubmit = () => {\n    form\n      .validateFields()\n      .then(values => {\n        console.log('Drawer form values:', values);\n\n        // Add the task to the task list\n        setTasks(prevTasks => [...prevTasks, values]);\n\n        form.resetFields(); // Reset the form fields after submission\n        setDrawerVisible(false); // Close the drawer\n      })\n      .catch(info => {\n        console.log('Validate Failed:', info);\n      });\n  };\n\n  return (\n    <>\n      <Form\n        name=\"main_form\"\n        onFinish={onFinish}\n        style={{ width: \"100%\" }}\n        autoComplete=\"off\"\n      >\n        <Form.List name=\"Tasks\">\n          {(fields, { add, remove }) => (\n            <>\n              <Form.Item\n                name={['Experiment name']}\n                style={{ marginBottom: '10px' }} // Add margin bottom\n              >\n            
    <Input\n                  placeholder=\"Experiment name\"\n                  style={{\n                    width: '300px', // Fixed width for the title input\n                    fontSize: '32px', // Font size\n                    resize: 'vertical', // Allow vertical resizing only\n                  }}\n                />\n              </Form.Item>\n\n              <Form.Item\n                name={['experiment_description']}\n                style={{ marginBottom: '16px' }} // Add margin bottom\n              >\n                <Input.TextArea\n                  placeholder=\"Type the description of the experiment\"\n                  style={{\n                    width: '100%', // Full width of the container\n                    height: '200px', // Height of the text area\n                    fontSize: '16px', // Font size\n                    resize: 'vertical', // Allow vertical resizing only\n                  }}\n                />\n              </Form.Item>\n            </>\n          )}\n        </Form.List>\n        <Form.Item>\n          <div style={{ display: 'flex', justifyContent: 'space-between' }}>\n            <Button type=\"primary\" htmlType=\"submit\" style={{\n              width: \"150px\",\n              backgroundColor: 'rgb(53, 162, 235)',\n            }}>\n              Submit\n            </Button>\n\n            <Button onClick={() => setDrawerVisible(true)} icon={<PlusOutlined />} style={{\n              width: \"150px\",\n              borderColor: 'black',\n            }}>\n              Add new task\n            </Button>\n          </div>\n        </Form.Item>\n      </Form>\n\n      <Drawer\n        title=\"Add new task\"\n        placement=\"right\"\n        closable={true}\n        onClose={() => setDrawerVisible(false)}\n        visible={drawerVisible}\n        width={720}\n      >\n        <Form\n          form={form}\n          name=\"drawer_form\"\n          onFinish={handleDrawerSubmit}\n          style={{ width: \"100%\" }}\n          autoComplete=\"off\"\n        >\n          <Form.Item\n            name=\"name\"\n            label={<span style={{ fontSize: '18px', fontWeight: 'bold' }}>Problem Name</span>}\n            rules={[{ required: true, message: 'Please select a problem name!' }]}\n          >\n            <Select\n              showSearch\n              placeholder=\"problem name\"\n              optionFilterProp=\"value\"\n              filterOption={filterOption}\n              style={{ fontSize: '14px', width: '300px' }}\n              options={data.map(item => ({ value: item.name }))}\n            />\n          </Form.Item>\n          <Form.Item\n            name=\"num_vars\"\n            label={<span style={{ fontSize: '18px', fontWeight: 'bold' }}>Number of Variables</span>}\n            rules={[{ required: true, message: 'Please enter the number of variables!' }]}\n          >\n            <Input placeholder=\"number of variables\" style={{ fontSize: '14px', width: '300px' }}/>\n          </Form.Item>\n          <Form.Item\n            name=\"num_objs\"\n            label={<span style={{ fontSize: '18px', fontWeight: 'bold' }}>Number of Objectives</span>}\n            rules={[{ required: true, message: 'Please select the number of objectives!' 
}]}\n          >\n            <Input placeholder=\"number of objectives\" style={{ fontSize: '14px', width: '300px' }}/>\n          </Form.Item>\n          <Form.Item\n            name=\"fidelity\"\n            label={<span style={{ fontSize: '18px', fontWeight: 'bold' }}>Fidelity</span>}\n            rules={[{ required: false, message: 'Please select fidelity!' }]}\n          >\n            <Select\n              placeholder=\"fidelity\"\n              options={[]}\n              style={{ fontSize: '14px', width: '300px' }}\n            />\n          </Form.Item>\n          <Form.Item\n            name=\"workloads\"\n            label={<span style={{ fontSize: '18px', fontWeight: 'bold' }}>Workloads</span>}\n            rules={[{ required: true, message: 'Please specify workloads!' }]}\n          >\n            <Input placeholder=\"specify workloads\" style={{ fontSize: '14px', width: '300px' }}/>\n          </Form.Item>\n          <Form.Item\n            name=\"budget_type\"\n            label={<span style={{ fontSize: '18px', fontWeight: 'bold' }}>Budget Type</span>}\n            rules={[{ required: true, message: 'Please select budget type!' }]}\n          >\n            <Select\n              placeholder=\"budget type\"\n              style={{ fontSize: '14px', width: '200px' }}\n              options={[\n                { value: \"function evaluations\" },\n                { value: \"hours\" },\n                { value: \"minutes\" },\n                { value: \"seconds\" },\n              ]}\n            />\n          </Form.Item>\n          <Form.Item\n            name=\"budget\"\n            label={<span style={{ fontSize: '18px', fontWeight: 'bold' }}>Budget</span>}\n\n            rules={[{ required: true, message: 'Please enter the budget!' }]}\n          >\n            <Input placeholder=\"budget\" style={{ fontSize: '14px', width: '200px' }} />\n          </Form.Item>\n\n          <Form.Item>\n            <Button type=\"primary\" htmlType=\"submit\" style={{ width: \"150px\", backgroundColor: 'rgb(53, 162, 235)' }}>\n              Add\n            </Button>\n          </Form.Item>\n        </Form>\n      </Drawer>\n\n      <Form>\n        <Form.Item>\n          <TaskTable tasks={tasks} />\n        </Form.Item>\n      </Form>\n    </>\n  );\n}\n\nexport default SelectTask;\n"
  },
  {
    "path": "webui/src/features/experiment/index.js",
    "content": "import React from \"react\";\n\nimport TitleCard from \"../../components/Cards/TitleCard\"\n\n\nimport SelectTask from \"./components/SelectTask\";\nimport SelectAlgorithm from \"./components/SelectAlgorithm\";\nimport SearchData from \"./components/SearchData\";\nimport SelectData from \"./components/SelectData\";\n\nclass Experiment extends React.Component {\n  constructor(props) {\n    super(props);\n    this.state = {\n      TasksData: [],\n      tasks: [],\n      SpaceRefiner: [],\n      Sampler: [],\n      Pretrain: [],\n      Model: [],\n      ACF: [],\n      DataSelector: [],\n      Normalizer: [],\n      optimizer: {},\n\n      get_info: false,\n      DatasetData: {\"isExact\": false, \"datasets\": []},\n      SpaceRefinerDataSelector: \"\",\n      SpaceRefinerDataSelectorParameters: \"\",\n      SamplerDataSelector: \"\",\n      SamplerDataSelectorParameters: \"\",\n      PretrainDataSelector: \"\",\n      PretrainDataSelectorParameters: \"\",\n      ModelDataSelector: \"\",\n      ModelDataSelectorParameters: \"\",\n      ACFDataSelector: \"\",\n      ACFDataSelectorParameters: \"\",\n      NormalizerDataSelector: \"\",\n      NormalizerDataSelectorParameters: \"\",\n      DatasetSelector: [],\n    };\n  }\n\n  updateTaskTable = (newTasks) => {\n    this.setState({ tasks: newTasks });\n  }\n\n  updateOptTable = (newOptimizer) => {\n    this.setState({ optimizer: newOptimizer  });\n  }\n\n\n  updateDataTable = (newDatasets) => {\n    console.log(\"newDatasets\", newDatasets)\n    const { object, DatasetSelector, parameter, datasets } = newDatasets;\n    if (object === \"Narrow Search Space\") {\n      this.setState({ SpaceRefiner: datasets, SpaceRefinerDataSelector: DatasetSelector, SpaceRefinerDataSelectorParameters: parameter})\n    } else if (object === \"Initialization\") {\n      this.setState({ Sampler: datasets, SamplerDataSelector: DatasetSelector, SamplerDataSelectorParameters: parameter})\n    } else if (object === \"Pre-train\") {\n      this.setState({ Pretrain: datasets, PretrainDataSelector: DatasetSelector, PretrainDataSelectorParameters: parameter})\n    } else if (object === \"Surrogate Model\") {\n      this.setState({ Model: datasets, ModelDataSelector: DatasetSelector, ModelDataSelectorParameters: parameter})\n    } else if (object === \"Acquisition Function\") {\n      this.setState({ ACF: datasets, ACFDataSelector: DatasetSelector, ACFDataSelectorParameters: parameter})\n    } else if (object === \"Normalizer\") {\n      this.setState({ Normalizer: datasets, NormalizerDataSelector: DatasetSelector, NormalizerDataSelectorParameters: parameter})\n    }\n  }\n\n  set_dataset = (datasets) => {\n    console.log(datasets)\n    this.setState({ DatasetData: datasets })\n  }\n\n  render() {\n    if (this.state.TasksData.length === 0) {\n      const messageToSend = {\n        action: 'ask for basic information',\n      }\n      fetch('http://localhost:5001/api/configuration/basic_information', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        } \n        return response.json();\n      })\n      .then(data => {\n        console.log('Message from back-end:', data);\n        this.setState({ TasksData: data.TasksData,\n                        tasks: data.tasks,\n                        SpaceRefiner: data.SpaceRefiner,\n    
                    Sampler: data.Sampler,\n                        Pretrain: data.Pretrain,\n                        Model: data.Model,\n                        ACF: data.ACF,\n                        DataSelector: data.DataSelector,\n                        Normalizer: data.Normalizer,\n                        get_info: true,\n                        SpaceRefinerDataSelector: data.SpaceRefinerDataSelector,\n                        SpaceRefinerDataSelectorParameters: data.SpaceRefinerDataSelectorParameters,\n                        SamplerDataSelector: data.SamplerDataSelector,\n                        SamplerDataSelectorParameters: data.SamplerDataSelectorParameters,\n                        PretrainDataSelector: data.PretrainDataSelector,\n                        PretrainDataSelectorParameters: data.PretrainDataSelectorParameters,\n                        ModelDataSelector: data.ModelDataSelector,\n                        ModelDataSelectorParameters: data.ModelDataSelectorParameters,\n                        ACFDataSelector: data.ACFDataSelector,\n                        ACFDataSelectorParameters: data.ACFDataSelectorParameters,\n                        NormalizerDataSelector: data.NormalizerDataSelector,\n                        NormalizerDataSelectorParameters: data.NormalizerDataSelectorParameters,\n                      });\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n      });\n\n      fetch('http://localhost:5001/api/RunPage/get_info', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        }\n        return response.json();\n      })\n      .then(data => {\n        console.log('Configuration information from back-end:', data);\n        this.setState({ tasks: data.tasks,\n                        optimizer: data.optimizer });\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n      });\n\n      // Nothing to render until the configuration has been fetched.\n      return null;\n    } else {\n      return (\n        <div className=\"grid mt-4 w-[1200px] h-[800px] gap-6\">\n\n            <TitleCard>\n                      <SelectTask data={this.state.TasksData} updateTable={this.updateTaskTable}/>\n            </TitleCard>\n\n            <TitleCard\n              title={\n                <h5>\n                  <span className=\"fw-semi-bold\">Build Algorithm</span>\n                </h5>\n              }\n              collapse\n            >\n              <SelectAlgorithm SpaceRefiner={this.state.SpaceRefiner}\n                                    Sampler={this.state.Sampler}\n                                    Pretrain={this.state.Pretrain}\n                                    Model={this.state.Model}\n                                    ACF={this.state.ACF}\n                                    DataSelector={this.state.DataSelector}\n                                    Normalizer={this.state.Normalizer}\n                                    updateTable={this.updateOptTable} />\n            </TitleCard>\n\n            <TitleCard\n              title={\n                <h5>\n                  <span className=\"fw-semi-bold\">Customize Auxiliary Data</span>\n                </h5>\n              }\n              collapse\n            >\n                  <SearchData set_dataset={this.set_dataset}/>\n
                  <p>\n                    Choose the datasets you want to use in the experiment.\n                  </p>\n                  <SelectData DatasetData={this.state.DatasetData} updateTable={this.updateDataTable} DatasetSelector={this.state.DataSelector}/>\n            </TitleCard>\n\n        </div>\n      );\n    }\n  }\n}\n\nexport default Experiment;"
  },
  {
    "path": "webui/src/features/integration/index.js",
    "content": "import { useState } from \"react\"\nimport { useDispatch } from \"react-redux\"\nimport TitleCard from \"../../components/Cards/TitleCard\"\nimport { showNotification } from \"../common/headerSlice\"\n\n\nconst INITIAL_INTEGRATION_LIST = [\n    {name : \"Slack\", icon : \"https://cdn-icons-png.flaticon.com/512/2111/2111615.png\", isActive : true, description : \"Slack is an instant messaging program designed by Slack Technologies and owned by Salesforce.\"},\n    {name : \"Facebook\", icon : \"https://cdn-icons-png.flaticon.com/512/124/124010.png\", isActive : false, description : \"Meta Platforms, Inc., doing business as Meta and formerly named Facebook, Inc., and TheFacebook.\"},\n    {name : \"Linkedin\", icon : \"https://cdn-icons-png.flaticon.com/512/174/174857.png\", isActive : true, description : \"LinkedIn is a business and employment-focused social media platform that works through websites and mobile apps.\"},\n    {name : \"Google Ads\", icon : \"https://cdn-icons-png.flaticon.com/512/2301/2301145.png\", isActive : false, description : \"Google Ads is an online advertising platform developed by Google, where advertisers bid to display brief advertisements, service offerings\"},\n    {name : \"Gmail\", icon : \"https://cdn-icons-png.flaticon.com/512/5968/5968534.png\", isActive : false, description : \"Gmail is a free email service provided by Google. As of 2019, it had 1.5 billion active users worldwide.\"},\n    {name : \"Salesforce\", icon : \"https://cdn-icons-png.flaticon.com/512/5968/5968880.png\", isActive : false, description : \"It provides customer relationship management software and applications focused on sales, customer service, marketing automation.\"},\n    {name : \"Hubspot\", icon : \"https://cdn-icons-png.flaticon.com/512/5968/5968872.png\", isActive : false, description : \"American developer and marketer of software products for inbound marketing, sales, and customer service.\"},\n]\n\nfunction Integration(){\n\n    const dispatch = useDispatch()\n\n    const [integrationList, setIntegrationList] = useState(INITIAL_INTEGRATION_LIST)\n\n\n    const updateIntegrationStatus = (index) => {\n        let integration = integrationList[index]\n        setIntegrationList(integrationList.map((i, k) => {\n            if(k===index)return {...i, isActive : !i.isActive}\n            return i\n        }))\n        dispatch(showNotification({message : `${integration.name} ${integration.isActive ? 
\"disabled\" : \"enabled\"}` , status : 1}))\n    }\n\n\n    return(\n        <>\n            <div className=\"grid grid-cols-1 md:grid-cols-3 gap-6\">\n            {\n                integrationList.map((i, k) => {\n                    return(\n                        <TitleCard key={k} title={i.name} topMargin={\"mt-2\"}>\n                            \n                            <p className=\"flex\">\n                                <img alt=\"icon\" src={i.icon} className=\"w-12 h-12 inline-block mr-4\" />\n                                {i.description}\n                            </p>\n                            <div className=\"mt-6 text-right\">\n                                <input type=\"checkbox\" className=\"toggle toggle-success toggle-lg\" checked={i.isActive} onChange={() => updateIntegrationStatus(k)}/>\n                            </div>\n                            \n                        </TitleCard>\n                    )\n                \n                })\n            }\n            </div>\n        </>\n    )\n}\n\nexport default Integration"
  },
  {
    "path": "webui/src/features/leads/components/AddLeadModalBody.js",
    "content": "import { useState } from \"react\"\nimport { useDispatch } from \"react-redux\"\nimport InputText from '../../../components/Input/InputText'\nimport ErrorText from '../../../components/Typography/ErrorText'\nimport { showNotification } from \"../../common/headerSlice\"\nimport { addNewLead } from \"../leadSlice\"\n\nconst INITIAL_LEAD_OBJ = {\n    first_name : \"\",\n    last_name : \"\",\n    email : \"\"\n}\n\nfunction AddLeadModalBody({closeModal}){\n    const dispatch = useDispatch()\n    const [loading, setLoading] = useState(false)\n    const [errorMessage, setErrorMessage] = useState(\"\")\n    const [leadObj, setLeadObj] = useState(INITIAL_LEAD_OBJ)\n\n\n    const saveNewLead = () => {\n        if(leadObj.first_name.trim() === \"\")return setErrorMessage(\"First Name is required!\")\n        else if(leadObj.email.trim() === \"\")return setErrorMessage(\"Email id is required!\")\n        else{\n            let newLeadObj = {\n                \"id\": 7,\n                \"email\": leadObj.email,\n                \"first_name\": leadObj.first_name,\n                \"last_name\": leadObj.last_name,\n                \"avatar\": \"https://reqres.in/img/faces/1-image.jpg\"\n            }\n            dispatch(addNewLead({newLeadObj}))\n            dispatch(showNotification({message : \"New Lead Added!\", status : 1}))\n            closeModal()\n        }\n    }\n\n    const updateFormValue = ({updateType, value}) => {\n        setErrorMessage(\"\")\n        setLeadObj({...leadObj, [updateType] : value})\n    }\n\n    return(\n        <>\n\n            <InputText type=\"text\" defaultValue={leadObj.first_name} updateType=\"first_name\" containerStyle=\"mt-4\" labelTitle=\"First Name\" updateFormValue={updateFormValue}/>\n\n            <InputText type=\"text\" defaultValue={leadObj.last_name} updateType=\"last_name\" containerStyle=\"mt-4\" labelTitle=\"Last Name\" updateFormValue={updateFormValue}/>\n\n            <InputText type=\"email\" defaultValue={leadObj.email} updateType=\"email\" containerStyle=\"mt-4\" labelTitle=\"Email Id\" updateFormValue={updateFormValue}/>\n\n\n            <ErrorText styleClass=\"mt-16\">{errorMessage}</ErrorText>\n            <div className=\"modal-action\">\n                <button  className=\"btn btn-ghost\" onClick={() => closeModal()}>Cancel</button>\n                <button  className=\"btn btn-primary px-6\" onClick={() => saveNewLead()}>Save</button>\n            </div>\n        </>\n    )\n}\n\nexport default AddLeadModalBody"
  },
  {
    "path": "webui/src/features/leads/index.js",
    "content": "import moment from \"moment\"\nimport { useEffect } from \"react\"\nimport { useDispatch, useSelector } from \"react-redux\"\nimport TitleCard from \"../../components/Cards/TitleCard\"\nimport { openModal } from \"../common/modalSlice\"\nimport { deleteLead, getLeadsContent } from \"./leadSlice\"\nimport { CONFIRMATION_MODAL_CLOSE_TYPES, MODAL_BODY_TYPES } from '../../utils/globalConstantUtil'\nimport TrashIcon from '@heroicons/react/24/outline/TrashIcon'\nimport { showNotification } from '../common/headerSlice'\n\nconst TopSideButtons = () => {\n\n    const dispatch = useDispatch()\n\n    const openAddNewLeadModal = () => {\n        dispatch(openModal({title : \"Add New Lead\", bodyType : MODAL_BODY_TYPES.LEAD_ADD_NEW}))\n    }\n\n    return(\n        <div className=\"inline-block float-right\">\n            <button className=\"btn px-6 btn-sm normal-case btn-primary\" onClick={() => openAddNewLeadModal()}>Add New</button>\n        </div>\n    )\n}\n\nfunction Leads(){\n\n    const {leads } = useSelector(state => state.lead)\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(getLeadsContent())\n    }, [])\n\n    \n\n    const getDummyStatus = (index) => {\n        if(index % 5 === 0)return <div className=\"badge\">Not Interested</div>\n        else if(index % 5 === 1)return <div className=\"badge badge-primary\">In Progress</div>\n        else if(index % 5 === 2)return <div className=\"badge badge-secondary\">Sold</div>\n        else if(index % 5 === 3)return <div className=\"badge badge-accent\">Need Followup</div>\n        else return <div className=\"badge badge-ghost\">Open</div>\n    }\n\n    const deleteCurrentLead = (index) => {\n        dispatch(openModal({title : \"Confirmation\", bodyType : MODAL_BODY_TYPES.CONFIRMATION, \n        extraObject : { message : `Are you sure you want to delete this lead?`, type : CONFIRMATION_MODAL_CLOSE_TYPES.LEAD_DELETE, index}}))\n    }\n\n    return(\n        <>\n            \n            <TitleCard title=\"Current Leads\" topMargin=\"mt-2\" TopSideButtons={<TopSideButtons />}>\n\n                {/* Leads List in table format loaded from slice after api call */}\n            <div className=\"overflow-x-auto w-full\">\n                <table className=\"table w-full\">\n                    <thead>\n                    <tr>\n                        <th>Name</th>\n                        <th>Email Id</th>\n                        <th>Created At</th>\n                        <th>Status</th>\n                        <th>Assigned To</th>\n                        <th></th>\n                    </tr>\n                    </thead>\n                    <tbody>\n                        {\n                            leads.map((l, k) => {\n                                return(\n                                    <tr key={k}>\n                                    <td>\n                                        <div className=\"flex items-center space-x-3\">\n                                            <div className=\"avatar\">\n                                                <div className=\"mask mask-squircle w-12 h-12\">\n                                                    <img src={l.avatar} alt=\"Avatar\" />\n                                                </div>\n                                            </div>\n                                            <div>\n                                                <div className=\"font-bold\">{l.first_name}</div>\n                                                
<div className=\"text-sm opacity-50\">{l.last_name}</div>\n                                            </div>\n                                        </div>\n                                    </td>\n                                    <td>{l.email}</td>\n                                    <td>{moment(new Date()).add(-5*(k+2), 'days').format(\"DD MMM YY\")}</td>\n                                    <td>{getDummyStatus(k)}</td>\n                                    <td>{l.last_name}</td>\n                                    <td><button className=\"btn btn-square btn-ghost\" onClick={() => deleteCurrentLead(k)}><TrashIcon className=\"w-5\"/></button></td>\n                                    </tr>\n                                )\n                            })\n                        }\n                    </tbody>\n                </table>\n            </div>\n            </TitleCard>\n        </>\n    )\n}\n\n\nexport default Leads"
  },
  {
    "path": "webui/src/features/leads/leadSlice.js",
    "content": "import { createSlice, createAsyncThunk } from '@reduxjs/toolkit'\nimport axios from 'axios'\n\n\n\nexport const getLeadsContent = createAsyncThunk('/leads/content', async () => {\n\tconst response = await axios.get('/api/users?page=2', {})\n\treturn response.data;\n})\n\nexport const leadsSlice = createSlice({\n    name: 'leads',\n    initialState: {\n        isLoading: false,\n        leads : []\n    },\n    reducers: {\n\n\n        addNewLead: (state, action) => {\n            let {newLeadObj} = action.payload\n            state.leads = [...state.leads, newLeadObj]\n        },\n\n        deleteLead: (state, action) => {\n            let {index} = action.payload\n            state.leads.splice(index, 1)\n        }\n    },\n\n    extraReducers: {\n\t\t[getLeadsContent.pending]: state => {\n\t\t\tstate.isLoading = true\n\t\t},\n\t\t[getLeadsContent.fulfilled]: (state, action) => {\n\t\t\tstate.leads = action.payload.data\n\t\t\tstate.isLoading = false\n\t\t},\n\t\t[getLeadsContent.rejected]: state => {\n\t\t\tstate.isLoading = false\n\t\t},\n    }\n})\n\nexport const { addNewLead, deleteLead } = leadsSlice.actions\n\nexport default leadsSlice.reducer"
  },
  {
    "path": "webui/src/features/run/components/DataTable.js",
    "content": "import React from \"react\";\nimport {\n  Table,\n} from \"reactstrap\";\n\nfunction DataTable({datasets, optimizer}) {\n    // console.log(\"datasets\",datasets);\n    return (\n        <Table lg={12} md={12} sm={12} striped>\n            <thead>\n                <tr className=\"fs-sm\">\n                <th className=\"hidden-sm-down\">#</th>\n                <th className=\"hidden-sm-down\">DataSelector</th>\n                <th className=\"hidden-sm-down\">Parameters</th>\n                <th className=\"hidden-sm-down\">Datasets</th>\n                </tr>\n            </thead>\n            <tbody>\n                <tr key=\"SpaceRefiner\">\n                    <td>Narrow Search Space</td>\n                    <td>{optimizer.SpaceRefinerDataSelector}</td>\n                    <td>{optimizer.SpaceRefinerDataSelectorParameters}</td>\n                    <td>{datasets.SpaceRefiner.join(', ')}</td>\n                </tr>\n                <tr key=\"Sampler\">\n                    <td>Initialization</td>\n                    <td>{optimizer.SamplerDataSelector}</td>\n                    <td>{optimizer.SamplerDataSelectorParameters}</td>\n                    <td>{datasets.Sampler.join(', ')}</td>\n                </tr>\n                <tr key=\"Pretrain\">\n                    <td>Pre-train</td>\n                    <td>{optimizer.PretrainDataSelector}</td>\n                    <td>{optimizer.PretrainDataSelectorParameters}</td>\n                    <td>{datasets.Pretrain.join(', ')}</td>\n                </tr>\n                <tr key=\"Model\">\n                    <td>Surrogate Model</td>\n                    <td>{optimizer.ModelDataSelector}</td>\n                    <td>{optimizer.ModelDataSelectorParameters}</td>\n                    <td>{datasets.Model.join(', ')}</td>\n                </tr>\n                <tr key=\"ACF\">\n                    <td>Acquisition Function</td>\n                    <td>{optimizer.ACFDataSelector}</td>\n                    <td>{optimizer.ACFDataSelectorParameters}</td>\n                    <td>{datasets.ACF.join(', ')}</td>\n                </tr>\n                <tr key=\"Normalizer\">\n                    <td>Normalizer</td>\n                    <td>{optimizer.NormalizerDataSelector}</td>\n                    <td>{optimizer.NormalizerDataSelectorParameters}</td>\n                    <td>{datasets.Normalizer.join(', ')}</td>\n                </tr>\n            </tbody>\n        </Table>\n    );\n}\n\nexport default DataTable;"
  },
  {
    "path": "webui/src/features/run/components/OptTable.js",
    "content": "import React from \"react\";\nimport {\n  Table,\n} from \"reactstrap\";\n\nfunction OptTable({optimizer}) {\n    // console.log(\"optimizer\",optimizer);\n    return (\n        <Table lg={12} md={12} sm={12} striped>\n            <thead>\n                <tr className=\"fs-sm\">\n                <th className=\"hidden-sm-down\">#</th>\n                <th className=\"hidden-sm-down\">Narrow Search Space</th>\n                <th className=\"hidden-sm-down\">Initialization</th>\n                <th className=\"hidden-sm-down\">Pre-train</th>\n                <th className=\"hidden-sm-down\">Surrogate Model</th>\n                <th className=\"hidden-sm-down\">Acquisition Function</th>\n                <th className=\"hidden-sm-down\">Normalizer</th>\n                </tr>\n            </thead>\n            <tbody>\n                <tr key=\"Name\">\n                    <td>Name</td>\n                    <td>{optimizer.SpaceRefiner}</td>\n                    <td>{optimizer.Sampler}</td>\n                    <td>{optimizer.Pretrain}</td>\n                    <td>{optimizer.Model}</td>\n                    <td>{optimizer.ACF}</td>\n                    <td>{optimizer.Normalizer}</td>\n                </tr>\n                <tr key=\"Parameters\">\n                    <td>Parameters</td>\n                    <td>{optimizer.SpaceRefinerParameters}</td>\n                    <td>InitNum:{optimizer.SamplerInitNum},{optimizer.SamplerParameters}</td>\n                    <td>{optimizer.PretrainParameters}</td>\n                    <td>{optimizer.ModelParameters}</td>\n                    <td>{optimizer.ACFParameters}</td>\n                    <td>{optimizer.NormalizerParameters}</td>\n                </tr>\n            </tbody>\n        </Table>\n    );\n}\n\nexport default OptTable;"
  },
  {
    "path": "webui/src/features/run/components/Run.js",
    "content": "import React, { useState } from \"react\";\n\nimport {\n    Button,\n    Form,\n    Input,\n    Space, \n    Select,\n    Modal,\n    ConfigProvider\n} from \"antd\";\n\nfunction Run() {\n    const [form] = Form.useForm()\n\n    const onFinish = (values) => {\n        // 构造要发送到后端的数据\n        const messageToSend = values\n        console.log('Request data:', messageToSend);\n        // 向后端发送请求...\n        fetch('http://localhost:5001/api/configuration/run', {\n            method: 'POST',\n            headers: {\n              'Content-Type': 'application/json',\n            },\n            body: JSON.stringify(messageToSend),\n          })\n          .then(response => {\n            if (!response.ok) {\n              throw new Error('Network response was not ok');\n            } \n            return response.json();\n          })\n          .then(isSucceed => {\n            console.log('Message from back-end:', isSucceed);\n            Modal.success({\n              title: 'Information',\n              content: 'Run Successfully!'\n            })\n          })\n          .catch((error) => {\n            console.error('Error sending message:', error);\n            var errorMessage = error.error;\n            Modal.error({\n              title: 'Information',\n              content: 'Error:' + errorMessage\n            })\n          });\n      };\n\n    return (\n        <ConfigProvider\n          theme={{\n            components: {\n              Input: {\n                addonBg:\"black\"\n              },\n            },\n          }}  \n        >\n        <Form\n            form={form}\n            name=\"dynamic_form_nest_item\"\n            onFinish={onFinish}\n            style={{ width:\"100%\" }}\n            autoComplete=\"off\"\n        >\n            <div style={{ overflowY: 'auto', maxHeight: '150px' }}>\n                <div style={{ display: 'flex', alignItems: 'baseline' }}>\n                    <h6 style={{color:\"black\"}}>Seeds</h6>\n                    <Form.Item name=\"Seeds\" style={{marginRight:10, marginLeft:10}}>\n                        <Input />\n                    </Form.Item>\n                    <h6 style={{color:\"black\"}}>Remote</h6>\n                    <Form.Item name=\"Remote\" style={{marginRight:10, marginLeft:10}}>\n                      <Select\n                      options={[ {value: \"True\"},\n                                  {value: \"False\"},\n                              ]}\n                      />\n                    </Form.Item>\n                    <h6 style={{color:\"black\"}}>ServerURL</h6>\n                    <Form.Item name=\"ServerURL\" style={{marginLeft:10}}>\n                        <Input />\n                    </Form.Item>\n                </div>\n            </div>\n            <Form.Item>\n            <Button type=\"primary\" htmlType=\"submit\" style={{width:\"120px\"}}>\n                Run\n            </Button>\n            </Form.Item>\n        </Form>\n      </ConfigProvider>\n    );\n}\n\nexport default Run;"
  },
  {
    "path": "webui/src/features/run/components/RunProgress.js",
    "content": "import React, { useState } from \"react\";\nimport { MinusCircleOutlined } from '@ant-design/icons'\nimport {\n    Progress,\n    ConfigProvider\n} from \"antd\";\n\n\nclass RunProgress extends React.Component {\n    constructor(props) {\n        super(props);\n        this.state = {\n            twoColors: {\n                '0%': '#108ee9',\n                '100%': '#87d068',\n            },\n            data: []\n        }\n    }\n    // 与后端交互，获取任务进度\n    componentDidMount() {\n        // 开始定时调用 fetchData 函数\n        this.intervalId = setInterval(this.fetchData, 1000);\n      }\n    \n      componentWillUnmount() {\n        // 清除定时器，以防止内存泄漏\n        clearInterval(this.intervalId);\n      }\n    \n      fetchData = async () => {\n        try {\n          const messageToSend = {\n            message:\"ask for progress\"\n          }\n          const response = await fetch('http://localhost:5001/api/configuration/run_progress', {\n            method: 'POST',\n            headers: {\n              'Content-Type': 'application/json',\n            },\n            body: JSON.stringify(messageToSend)\n          });\n          if (!response.ok) {\n            throw new Error('Network response was not ok');\n          }\n          const data = await response.json();\n          console.log('Progress:', data);\n          // 在这里处理从服务器获取的数据\n          this.setState({\n            data: data\n          })\n          // console.log('State:', this.state.BarData)\n        } catch (error) {\n          console.error('Error fetching data:', error);\n        }\n      };\n\n      handleClick = (task_name) => {\n        const messageToSend = {\n          name: task_name\n        }\n        fetch('http://localhost:5001/api/configuration/stop_progress', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        } \n        return response.json();\n      })\n      .then(succeed => {\n        console.log('Message from back-end:', succeed);\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n      });\n      }\n\n      render() {\n        return (\n            <ConfigProvider\n            theme={{\n                token:{\n                    colorText: \"#696969\"\n                },\n                components: {\n                Progress: {\n                    remainingColor: \"#696969\"\n                },\n                },\n            }}\n            >\n                <div style={{ overflowY: 'auto', maxHeight: '200px', maxWidth: '100%' }}>\n                    {this.state.data.map((task, index) => (\n                        <div key={index} style={{ marginBottom: 10 }}>\n                            <h6>{task.name}</h6>\n                            <Progress percent={task.progress} status=\"active\" type=\"line\" strokeColor={this.state.twoColors} style={{ width:\"93%\", marginRight:25}} />\n                            <MinusCircleOutlined style={{color: 'white'}} onClick={()=>this.handleClick(task.name)} />\n                        </div>\n                    ))}\n                </div>\n            </ConfigProvider>\n        )\n      }\n}\n\n\n\nexport default RunProgress;"
  },
  {
    "path": "webui/src/features/run/components/TaskTable.js",
    "content": "import React from \"react\";\nimport {\n  Table,\n} from \"reactstrap\";\n\nfunction TaskTable({tasks}) {\n    // console.log(tasks);\n    return (\n        <Table lg={12} md={12} sm={12} striped>\n            <thead>\n                <tr className=\"fs-sm\">\n                <th className=\"hidden-sm-down\">#</th>\n                <th className=\"hidden-sm-down\">Name</th>\n                <th className=\"hidden-sm-down\">Num_vars</th>\n                <th className=\"hidden-sm-down\">Num_objs</th>\n                <th className=\"hidden-sm-down\">Fidelity</th>\n                <th className=\"hidden-sm-down\">workloads</th>\n                <th className=\"hidden-sm-down\">budget_type</th>\n                <th className=\"hidden-sm-down\">budget</th>\n                </tr>\n            </thead>\n            <tbody>\n                {tasks.map((task, index) => (\n                    <tr key={index}>\n                        <td>{index+1}</td>\n                        <td>{task.name}</td>\n                        <td>{task.num_vars}</td>\n                        <td>{task.num_objs}</td>\n                        <td>{task.fidelity}</td>\n                        <td>{task.workloads}</td>\n                        <td>{task.budget_type}</td>\n                        <td>{task.budget}</td>\n                    </tr>\n                ))}\n            </tbody>\n        </Table>\n    );\n}\n\nexport default TaskTable;"
  },
  {
    "path": "webui/src/features/run/index.js",
    "content": "import React from \"react\";\n\nimport { Row, Col } from \"reactstrap\";\n\nimport TitleCard from \"../../components/Cards/TitleCard\"\n\nimport Run from \"./components/Run\"\nimport RunProgress from \"./components/RunProgress\"\nimport TaskTable from \"./components/TaskTable\";\nimport OptTable from \"./components/OptTable\";\nimport DataTable from \"./components/DataTable\";\n\n\nclass RunPage extends React.Component {\n  constructor(props) {\n    super(props);\n    this.state = {\n      get_info: false,\n      tasks: [],\n      optimizer: {},\n      datasets: {},\n    };\n  }\n\n  render() { \n    // If first time rendering, then render the default task\n    // If not, then render the task that was clicked\n    if (this.state.get_info === false) {\n      // TODO: ask for task list from back-end\n      const messageToSend = {\n        action: 'ask for information',\n      }\n      fetch('http://localhost:5001/api/RunPage/get_info', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        } \n        return response.json();\n      })\n      .then(data => {\n        console.log('Configuration infomation from back-end:', data);\n        this.setState({ get_info: true,  \n                        tasks: data.tasks,\n                        optimizer: data.optimizer,\n                        datasets: data.datasets});\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n      });\n\n      \n      // Set the default task as the first task in the list\n      return (\n        <div>\n          <h1 className=\"page-title\">\n            <span className=\"fw-semi-bold\">Run</span>\n          </h1>\n        </div>\n      )\n    } else {\n\n      return (\n        <div>\n          <div>\n            <Row>\n              <Col lg={12} xs={12}>\n                <TitleCard\n                  title={\n                    <h5>\n                      <span className=\"fw-semi-bold\">Experiment Set-up</span>\n                    </h5>\n                  }\n                  collapse\n                >\n                  <h4>\n                    Problems\n                  </h4>\n                  <TaskTable tasks={this.state.tasks} />\n                  <h4>\n                    Algorithm\n                  </h4>\n                  <OptTable optimizer={this.state.optimizer} />\n                  <h4>\n                    Data\n                  </h4>\n                  <DataTable datasets={this.state.datasets} optimizer={this.state.optimizer}/>\n                  <Run />\n                  <RunProgress />\n                </TitleCard>\n              </Col>\n            </Row>\n          </div>\n        </div>\n      );\n    }\n  }\n\n}\n\nexport default RunPage;\n"
  },
  {
    "path": "webui/src/features/seldata/components/DataTable.js",
    "content": "import React from \"react\";\nimport {\n  Table,\n} from \"reactstrap\";\n\nfunction DataTable({ SpaceRefiner, SpaceRefinerDataSelector, SpaceRefinerDataSelectorParameters,\n    Sampler, SamplerDataSelector, SamplerDataSelectorParameters,\n    Pretrain, PretrainDataSelector, PretrainDataSelectorParameters,\n    Model, ModelDataSelector, ModelDataSelectorParameters,\n    ACF, ACFDataSelector, ACFDataSelectorParameters,\n    Normalizer, NormalizerDataSelector, NormalizerDataSelectorParameters,\n}) {\n    console.log(\"SpaceRefiner\",SpaceRefiner);\n    return (\n        <Table lg={12} md={12} sm={12} striped>\n            <thead>\n                <tr className=\"fs-sm\">\n                <th className=\"hidden-sm-down\">#</th>\n                <th className=\"hidden-sm-down\">DataSelector</th>\n                <th className=\"hidden-sm-down\">Parameters</th>\n                <th className=\"hidden-sm-down\">Datasets</th>\n                </tr>\n            </thead>\n            <tbody>\n                <tr key=\"SpaceRefiner\">\n                    <td>Narrow Search Space</td>\n                    <td>{SpaceRefinerDataSelector}</td>\n                    <td>{SpaceRefinerDataSelectorParameters}</td>\n                    <td>{SpaceRefiner.join(', ')}</td>\n                </tr>\n                <tr key=\"Sampler\">\n                    <td>Initialization</td>\n                    <td>{SamplerDataSelector}</td>\n                    <td>{SamplerDataSelectorParameters}</td>\n                    <td>{Sampler.join(', ')}</td>\n                </tr>\n                <tr key=\"Pretrain\">\n                    <td>Pre-train</td>\n                    <td>{PretrainDataSelector}</td>\n                    <td>{PretrainDataSelectorParameters}</td>\n                    <td>{Pretrain.join(', ')}</td>\n                </tr>\n                <tr key=\"Model\">\n                    <td>Surrogate Model</td>\n                    <td>{ModelDataSelector}</td>\n                    <td>{ModelDataSelectorParameters}</td>\n                    <td>{Model.join(', ')}</td>\n                </tr>\n                <tr key=\"ACF\">\n                    <td>Acquisition Function</td>\n                    <td>{ACFDataSelector}</td>\n                    <td>{ACFDataSelectorParameters}</td>\n                    <td>{ACF.join(', ')}</td>\n                </tr>\n                <tr key=\"Normalizer\">\n                    <td>Normalizer</td>\n                    <td>{NormalizerDataSelector}</td>\n                    <td>{NormalizerDataSelectorParameters}</td>\n                    <td>{Normalizer.join(', ')}</td>\n                </tr>\n            </tbody>\n        </Table>\n    );\n}\n\nexport default DataTable;"
  },
  {
    "path": "webui/src/features/seldata/components/SearchData.js",
    "content": "import React, {useState} from \"react\";\n\nimport {\n    Row,\n    Col,\n    Button,\n    InputNumber,\n    Slider,\n    Space,\n    Input,\n    Form,\n    ConfigProvider,\n    Select,\n    Modal,\n} from \"antd\";\n\n\nfunction SearchData({set_dataset}) {\n  const [form] = Form.useForm()\n\n  const onFinish = (values) => {\n    const messageToSend = values;\n    console.log('Request data:', messageToSend);\n    // 向后端发送请求...\n    fetch('http://localhost:5001/api/configuration/search_dataset', {\n      method: 'POST',\n      headers: {\n        'Content-Type': 'application/json',\n      },\n      body: JSON.stringify(messageToSend), \n    })\n    .then(response => {\n      if (!response.ok) {\n        throw new Error('Network response was not ok');\n      } \n      return response.json();\n    })\n    .then(message => {\n      console.log('Message from back-end:', message);\n        set_dataset(message)\n      }\n    )\n    .catch((error) => {\n      console.error('Error sending message:', error);\n      var errorMessage = error.error;\n      Modal.error({\n        title: 'Information',\n        content: 'Error:' + errorMessage\n      })\n    });\n  }\n\n  return(\n    <ConfigProvider\n      theme={{\n        components: {\n          Input: {\n            addonBg:\"white\"\n          },\n        },\n      }}  \n    >\n    <Form\n      name=\"SearchData\"\n      form={form}\n      onFinish={onFinish}\n      style={{width:\"100%\"}}\n      autoComplete=\"off\"\n    >\n      <Space className=\"space\" style={{ display: 'flex'}} align=\"baseline\">\n        <Form.Item\n          name=\"task_name\"\n          style={{flexGrow: 1}}\n        >\n          <Input addonBefore={\"Dataset Name\"}/>\n        </Form.Item>\n        <Form.Item\n          name=\"num_variables\"\n          style={{flexGrow: 1}}\n        >\n          <Input addonBefore={\"Num of Variables\"}/>\n        </Form.Item>\n      </Space>\n      <Space className=\"space\" style={{ display: 'flex'}} align=\"baseline\">\n        <Form.Item\n          name=\"variables_name\"\n          style={{flexGrow: 1}}\n        >\n          <Input addonBefore={\"Variable Name\"}/>\n        </Form.Item>\n        <Form.Item\n          name=\"num_objectives\"\n          style={{flexGrow: 1}}\n        >\n          <Input addonBefore={\"Num of Objectives\"}/>\n        </Form.Item>\n      </Space>\n      <h6 style={{color:\"black\"}}>Search method:</h6>\n      <Space className=\"space\" style={{ display: 'flex'}} align=\"baseline\">\n      <Form.Item\n        name=\"search_method\"\n      >\n        <Select style={{minWidth: 150}}\n          options={[ {value: \"Hash\"},\n                      {value: \"Fuzzy\"},\n                      {value: \"LSH\"},\n                  ]}\n        />\n      </Form.Item>\n      <Form.Item>\n        <Button type=\"primary\" htmlType=\"submit\" style={{width:\"120px\"}}>\n          Search\n        </Button>\n      </Form.Item>\n      </Space>\n    </Form>\n    </ConfigProvider>\n  )\n}\n\nexport default SearchData"
  },
  {
    "path": "webui/src/features/seldata/components/SelectData.css",
    "content": ""
  },
  {
    "path": "webui/src/features/seldata/components/SelectData.js",
    "content": "import React, {useState} from \"react\";\n\nimport {\n    Button,\n    Checkbox,\n    ConfigProvider,\n    Modal,\n    Select,\n    Input,\n} from \"antd\";\n\nconst CheckboxGroup = Checkbox.Group;\n\nfunction SelectData({DatasetData, updateTable, DatasetSelector}) {\n    var data = []\n    if (DatasetData.isExact) {\n      data = [DatasetData.datasets.name]\n    } else {\n      data = DatasetData.datasets\n    }\n    const [checkedList, setCheckedList] = useState([]);\n    const [selectedOption, setSelectedOption] = useState();\n    const [selector, setSelector] = useState(\"None\");\n    const [parameter, setParameter] = useState(\"\");\n    const checkAll = data.length === checkedList.length;\n    const indeterminate = checkedList.length > 0 && checkedList.length < data.length;\n    const onChange = (list) => {\n        setCheckedList(list);\n    };\n    const onCheckAllChange = (e) => {\n        setCheckedList(e.target.checked ? data : []);\n    };\n    const handleSelectChange = (value) => {\n      setSelectedOption(value); // 当选择发生变化时更新选项\n    };\n    const handleSelectorChange = (value) => {\n      setSelector(value); \n    };\n    const handleParameterChange = (event) => {\n      setParameter(event.target.value);\n    };\n    const handleClick = () => {\n      const datasetList = checkedList.map(item => {\n        return item;\n      });\n      const messageToSend = {\n        object: selectedOption,\n        DatasetSelector: selector,\n        parameter: parameter,\n        datasets: datasetList,\n      }\n      updateTable(messageToSend)\n      console.log(messageToSend)\n      fetch('http://localhost:5001/api/configuration/dataset', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        } \n        return response.json();\n      })\n      .then(succeed => {\n        console.log('Message from back-end:', succeed);\n        Modal.success({\n          title: 'Information',\n          content: 'Submit successfully!'\n        })\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n        var errorMessage = error.error;\n        Modal.error({\n          title: 'Information',\n          content: 'Error:' + errorMessage\n        })\n      });\n    }\n\n    const handleDelete = () => {\n      const datasetList = checkedList.map(item => {\n        return item;\n      });\n      const messageToSend = {\n        datasets: datasetList,\n      }\n      console.log(messageToSend)\n      fetch('http://localhost:5001/api/configuration/delete_dataset', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        } \n        return response.json();\n      })\n      .then(succeed => {\n        console.log('Message from back-end:', succeed);\n        datasetList.forEach(item => {\n          let index = data.indexOf(item);\n          if (index !== -1) {\n            data.splice(index, 1);\n          }\n        });\n        var newDataset = {\"isExact\": false, \"datasets\": data}\n        console.log(\"new dataset:\", newDataset)\n        // set_dataset(newDataset)\n        Modal.success({\n     
     title: 'Information',\n          content: 'Delete successfully!'\n        })\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n        var errorMessage = error.error;\n        Modal.error({\n          title: 'Information',\n          content: 'Error:' + errorMessage\n        })\n      });\n    }\n\n    return(\n        <ConfigProvider\n          theme={{\n            components: {\n              Checkbox: {\n                colorText:\"black\"\n              },\n            },\n          }}        \n        >\n          <div style={{ overflowY: 'auto', maxHeight: '300px' }}>\n            <Checkbox indeterminate={indeterminate} onChange={onCheckAllChange} checked={checkAll}>\n                Check all\n            </Checkbox>\n            <CheckboxGroup options={data} value={checkedList} onChange={onChange}/>\n          </div>\n          <Info  isExact={DatasetData.isExact} data={DatasetData.datasets}/>\n          <div style={{marginTop:\"20px\"}}>\n            <Select\n            style={{minWidth: 170, margin: 5}}\n            options={[ {value: \"Narrow Search Space\"},\n                        {value: \"Initialization\"},\n                        {value: \"Pre-train\"},\n                        {value: \"Surrogate Model\"},\n                        {value: \"Acquisition Function\"},\n                        {value: \"Normalizer\"}\n                    ]}\n            onChange={handleSelectChange}\n            />\n            <Select\n            style={{minWidth: 90, margin:5}}\n            placeholder = \"Dataset Selector\"\n            options = {DatasetSelector.map(item => ({ value: item.name })).concat({ value: \"None\" })}\n            onChange={handleSelectorChange}\n            />\n            <Input style={{width: 400, margin:5}} placeholder=\"Parameters\" onChange={handleParameterChange}/>\n            <Button type=\"primary\" htmlType=\"submit\" style={{width:\"120px\", margin:5}} onClick={handleClick}>\n              Submit\n            </Button>\n            <Button danger style={{width:\"120px\", margin:5}} onClick={handleDelete}>\n              Delete\n            </Button>\n          </div>\n        </ConfigProvider>\n    )\n}\n\n\nfunction Info({isExact, data}) {\n  if (isExact) {\n    return (\n      <div style={{ overflowY: 'auto', maxHeight: '250px' }}>\n        <h4><strong>Information</strong></h4>\n          <ul>\n            <li><h6><span className=\"fw-semi-bold\">Name</span>: {data.name}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Dim</span>: {data.dim}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Obj</span>: {data.obj}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Fidelity</span>: {data.fidelity}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Workloads</span>: {data.workloads}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Budget type</span>: {data.budget_type}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Budget</span>: {data.budget}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Seeds</span>: {data.seeds}</h6></li>\n          </ul>\n          <h4 className=\"mt-5\"><strong>Algorithm</strong></h4>\n          <ul>\n            <li><h6><span className=\"fw-semi-bold\">Space refiner</span>: {data.SpaceRefiner}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Sampler</span>: {data.Sampler}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Pretrain</span>: 
{data.Pretrain}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Model</span>: {data.Model}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">ACF</span>: {data.ACF}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">DatasetSelector</span>: {data.DatasetSelector}</h6></li>\n            <li><h6><span className=\"fw-semi-bold\">Normalizer</span>: {data.Normalizer}</h6></li>\n          </ul>\n          <h4 className=\"mt-5\"><strong>Auxiliary Data List</strong></h4>\n          <ul>\n            {data.metadata.map((dataset, index) => (\n              <li key={index}><h6>{dataset}</h6></li>\n            ))}\n          </ul>\n      </div>\n    )\n  } else {\n    return (\n      <></>\n    )\n  }\n}\n\nexport default SelectData"
  },
  {
    "path": "webui/src/features/seldata/index.js",
    "content": "import React from \"react\";\nimport {\n  Row,\n  Col,\n} from \"reactstrap\";\n\nimport TitleCard from \"../../components/Cards/TitleCard\"\n\nimport SelectData from \"./components/SelectData\";\nimport SearchData from \"./components/SearchData\"\nimport DataTable from \"./components/DataTable\";\n\n\nclass Dataselector extends React.Component {\n  constructor(props) {\n    super(props);\n    this.state = {\n      get_info: false,\n      DatasetData: {\"isExact\": false, \"datasets\": []},\n      SpaceRefiner: [],\n      SpaceRefinerDataSelector: \"\",\n      SpaceRefinerDataSelectorParameters: \"\",\n      Sampler: [],\n      SamplerDataSelector: \"\",\n      SamplerDataSelectorParameters: \"\",\n      Pretrain: [],\n      PretrainDataSelector: \"\",\n      PretrainDataSelectorParameters: \"\",\n      Model: [],\n      ModelDataSelector: \"\",\n      ModelDataSelectorParameters: \"\",\n      ACF: [],\n      ACFDataSelector: \"\",\n      ACFDataSelectorParameters: \"\",\n      Normalizer: [],\n      NormalizerDataSelector: \"\",\n      NormalizerDataSelectorParameters: \"\",\n      DatasetSelector: [],\n    };\n  }\n\n  updateTable = (newDatasets) => {\n    console.log(\"newDatasets\", newDatasets)\n    const { object, DatasetSelector, parameter, datasets } = newDatasets;\n    if (object === \"Narrow Search Space\") {\n      this.setState({ SpaceRefiner: datasets, SpaceRefinerDataSelector: DatasetSelector, SpaceRefinerDataSelectorParameters: parameter})\n    } else if (object === \"Initialization\") {\n      this.setState({ Sampler: datasets, SamplerDataSelector: DatasetSelector, SamplerDataSelectorParameters: parameter})\n    } else if (object === \"Pre-train\") {\n      this.setState({ Pretrain: datasets, PretrainDataSelector: DatasetSelector, PretrainDataSelectorParameters: parameter})\n    } else if (object === \"Surrogate Model\") {\n      this.setState({ Model: datasets, ModelDataSelector: DatasetSelector, ModelDataSelectorParameters: parameter})\n    } else if (object === \"Acquisition Function\") {\n      this.setState({ ACF: datasets, ACFDataSelector: DatasetSelector, ACFDataSelectorParameters: parameter})\n    } else if (object === \"Normalizer\") {\n      this.setState({ Normalizer: datasets, NormalizerDataSelector: DatasetSelector, NormalizerDataSelectorParameters: parameter})\n    }\n  }\n\n  set_dataset = (datasets) => {\n    console.log(datasets)\n    this.setState({ DatasetData: datasets })\n  }\n\n  render() {\n    if (this.state.get_info === false) {\n      // TODO: ask for task list from back-end\n      const messageToSend = {\n        action: 'ask for information',\n      }\n      fetch('http://localhost:5001/api/RunPage/get_info', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        } \n        return response.json();\n      })\n      .then(data => {\n        console.log('Configuration infomation from back-end:', data);\n        this.setState({ get_info: true,  \n                        SpaceRefiner: data.datasets.SpaceRefiner,\n                        Sampler: data.datasets.Sampler,\n                        Pretrain: data.datasets.Pretrain,\n                        Model: data.datasets.Model,\n                        ACF: data.datasets.ACF,\n                        Normalizer: data.datasets.Normalizer,\n                    
    SpaceRefinerDataSelector: data.optimizer.SpaceRefinerDataSelector,\n                        SpaceRefinerDataSelectorParameters: data.optimizer.SpaceRefinerDataSelectorParameters,\n                        SamplerDataSelector: data.optimizer.SamplerDataSelector,\n                        SamplerDataSelectorParameters: data.optimizer.SamplerDataSelectorParameters,\n                        PretrainDataSelector: data.optimizer.PretrainDataSelector,\n                        PretrainDataSelectorParameters: data.optimizer.PretrainDataSelectorParameters,\n                        ModelDataSelector: data.optimizer.ModelDataSelector,\n                        ModelDataSelectorParameters: data.optimizer.ModelDataSelectorParameters,\n                        ACFDataSelector: data.optimizer.ACFDataSelector,\n                        ACFDataSelectorParameters: data.optimizer.ACFDataSelectorParameters,\n                        NormalizerDataSelector: data.optimizer.NormalizerDataSelector,\n                        NormalizerDataSelectorParameters: data.optimizer.NormalizerDataSelectorParameters,\n                      });\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n      });\n\n      fetch('http://localhost:5001/api/configuration/basic_information', {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n        },\n        body: JSON.stringify(messageToSend),\n      })\n      .then(response => {\n        if (!response.ok) {\n          throw new Error('Network response was not ok');\n        } \n        return response.json();\n      })\n      .then(data => {\n        console.log('Message from back-end:', data);\n        this.setState({ DatasetSelector: data.DataSelector });\n      })\n      .catch((error) => {\n        console.error('Error sending message:', error);\n      });\n\n  } else {\n      return (\n        <div>\n            <Row>\n              <Col lg={12} sm={12}> \n                <TitleCard>\n                  <SearchData set_dataset={this.set_dataset}/>\n                  <p>\n                    Choose the datasets you want to use in the experiment.\n                  </p>\n                  <SelectData DatasetData={this.state.DatasetData} updateTable={this.updateTable} DatasetSelector={this.state.DatasetSelector}/>\n                </TitleCard>\n              </Col>\n              <Col lg={12} xs={12}>\n                <TitleCard\n                  title={\n                    <h5>\n                      <span className=\"fw-semi-bold\">Selected Datasets</span>\n                    </h5>\n                  }\n                  collapse\n                >\n                  <DataTable SpaceRefiner={this.state.SpaceRefiner} \n                              SpaceRefinerDataSelector={this.state.SpaceRefinerDataSelector}\n                              SpaceRefinerDataSelectorParameters={this.state.SpaceRefinerDataSelectorParameters}\n                              Sampler={this.state.Sampler} \n                              SamplerDataSelector={this.state.SamplerDataSelector}\n                              SamplerDataSelectorParameters={this.state.SamplerDataSelectorParameters}\n                              Pretrain={this.state.Pretrain} \n                              PretrainDataSelector={this.state.PretrainDataSelector}\n                              PretrainDataSelectorParameters={this.state.PretrainDataSelectorParameters}\n                              Model={this.state.Model} \n                        
      ModelDataSelector={this.state.ModelDataSelector}\n                              ModelDataSelectorParameters={this.state.ModelDataSelectorParameters}\n                              ACF={this.state.ACF} \n                              ACFDataSelector={this.state.ACFDataSelector}\n                              ACFDataSelectorParameters={this.state.ACFDataSelectorParameters}\n                              Normalizer={this.state.Normalizer}\n                              NormalizerDataSelector={this.state.NormalizerDataSelector}\n                              NormalizerDataSelectorParameters={this.state.NormalizerDataSelectorParameters}\n                  />\n                </TitleCard>\n              </Col>\n            </Row>\n        </div>\n      );\n    }\n  }\n}\n\nexport default Dataselector;"
  },
  {
    "path": "webui/src/features/settings/billing/index.js",
    "content": "import moment from \"moment\"\nimport { useEffect, useState } from \"react\"\nimport { useDispatch, useSelector } from \"react-redux\"\nimport TitleCard from \"../../../components/Cards/TitleCard\"\nimport { showNotification } from '../../common/headerSlice'\n\n\n\nconst BILLS = [\n    {invoiceNo : \"#4567\", amount : \"23,989\", description : \"Product usages\", status : \"Pending\", generatedOn : moment(new Date()).add(-30*1, 'days').format(\"DD MMM YYYY\"),  paidOn : \"-\"},\n\n    {invoiceNo : \"#4523\", amount : \"34,989\", description : \"Product usages\", status : \"Pending\", generatedOn : moment(new Date()).add(-30*2, 'days').format(\"DD MMM YYYY\"), paidOn : \"-\"},\n\n    {invoiceNo : \"#4453\", amount : \"39,989\", description : \"Product usages\", status : \"Paid\", generatedOn : moment(new Date()).add(-30*3, 'days').format(\"DD MMM YYYY\"), paidOn : moment(new Date()).add(-24*2, 'days').format(\"DD MMM YYYY\")},\n\n    {invoiceNo : \"#4359\", amount : \"28,927\", description : \"Product usages\", status : \"Paid\", generatedOn : moment(new Date()).add(-30*4, 'days').format(\"DD MMM YYYY\"), paidOn : moment(new Date()).add(-24*3, 'days').format(\"DD MMM YYYY\")},\n\n    {invoiceNo : \"#3359\", amount : \"28,927\", description : \"Product usages\", status : \"Paid\", generatedOn : moment(new Date()).add(-30*5, 'days').format(\"DD MMM YYYY\"), paidOn : moment(new Date()).add(-24*4, 'days').format(\"DD MMM YYYY\")},\n\n    {invoiceNo : \"#3367\", amount : \"28,927\", description : \"Product usages\", status : \"Paid\", generatedOn : moment(new Date()).add(-30*6, 'days').format(\"DD MMM YYYY\"), paidOn : moment(new Date()).add(-24*5, 'days').format(\"DD MMM YYYY\")},\n\n    {invoiceNo : \"#3359\", amount : \"28,927\", description : \"Product usages\", status : \"Paid\", generatedOn : moment(new Date()).add(-30*7, 'days').format(\"DD MMM YYYY\"), paidOn : moment(new Date()).add(-24*6, 'days').format(\"DD MMM YYYY\")},\n\n    {invoiceNo : \"#2359\", amount : \"28,927\", description : \"Product usages\", status : \"Paid\", generatedOn : moment(new Date()).add(-30*8, 'days').format(\"DD MMM YYYY\"), paidOn : moment(new Date()).add(-24*7, 'days').format(\"DD MMM YYYY\")},\n\n\n]\n\nfunction Billing(){\n\n\n    const [bills, setBills] = useState(BILLS)\n\n    const getPaymentStatus = (status) => {\n        if(status  === \"Paid\")return <div className=\"badge badge-success\">{status}</div>\n        if(status  === \"Pending\")return <div className=\"badge badge-primary\">{status}</div>\n        else return <div className=\"badge badge-ghost\">{status}</div>\n    }\n\n    return(\n        <>\n            \n            <TitleCard title=\"Billing History\" topMargin=\"mt-2\">\n\n                {/* Invoice list in table format loaded constant */}\n            <div className=\"overflow-x-auto w-full\">\n                <table className=\"table w-full\">\n                    <thead>\n                    <tr>\n                        <th>Invoice No</th>\n                        <th>Invoice Generated On</th>\n                        <th>Description</th>\n                        <th>Amount</th>\n                        <th>Status</th>\n                        <th>Invoice Paid On</th>\n                    </tr>\n                    </thead>\n                    <tbody>\n                        {\n                            bills.map((l, k) => {\n                                return(\n                                    <tr key={k}>\n                                    
<td>{l.invoiceNo}</td>\n                                    <td>{l.generatedOn}</td>\n                                    <td>{l.description}</td>\n                                    <td>${l.amount}</td>\n                                    <td>{getPaymentStatus(l.status)}</td>\n                                    <td>{l.paidOn}</td>\n                                    </tr>\n                                )\n                            })\n                        }\n                    </tbody>\n                </table>\n            </div>\n            </TitleCard>\n        </>\n    )\n}\n\n\nexport default Billing"
  },
  {
    "path": "webui/src/features/settings/profilesettings/index.js",
    "content": "import moment from \"moment\"\nimport { useEffect, useState } from \"react\"\nimport { useDispatch, useSelector } from \"react-redux\"\nimport TitleCard from \"../../../components/Cards/TitleCard\"\nimport { showNotification } from '../../common/headerSlice'\nimport InputText from '../../../components/Input/InputText'\nimport TextAreaInput from '../../../components/Input/TextAreaInput'\nimport ToogleInput from '../../../components/Input/ToogleInput'\n\nfunction ProfileSettings(){\n\n\n    const dispatch = useDispatch()\n\n    // Call API to update profile settings changes\n    const updateProfile = () => {\n        dispatch(showNotification({message : \"Profile Updated\", status : 1}))    \n    }\n\n    const updateFormValue = ({updateType, value}) => {\n        console.log(updateType)\n    }\n\n    return(\n        <>\n            \n            <TitleCard title=\"Profile Settings\" topMargin=\"mt-2\">\n\n                <div className=\"grid grid-cols-1 md:grid-cols-2 gap-6\">\n                    <InputText labelTitle=\"Name\" defaultValue=\"Alex\" updateFormValue={updateFormValue}/>\n                    <InputText labelTitle=\"Email Id\" defaultValue=\"alex@dashwind.com\" updateFormValue={updateFormValue}/>\n                    <InputText labelTitle=\"Title\" defaultValue=\"UI/UX Designer\" updateFormValue={updateFormValue}/>\n                    <InputText labelTitle=\"Place\" defaultValue=\"California\" updateFormValue={updateFormValue}/>\n                    <TextAreaInput labelTitle=\"About\" defaultValue=\"Doing what I love, part time traveller\" updateFormValue={updateFormValue}/>\n                </div>\n                <div className=\"divider\" ></div>\n\n                <div className=\"grid grid-cols-1 md:grid-cols-2 gap-6\">\n                    <InputText labelTitle=\"Language\" defaultValue=\"English\" updateFormValue={updateFormValue}/>\n                    <InputText labelTitle=\"Timezone\" defaultValue=\"IST\" updateFormValue={updateFormValue}/>\n                    <ToogleInput updateType=\"syncData\" labelTitle=\"Sync Data\" defaultValue={true} updateFormValue={updateFormValue}/>\n                </div>\n\n                <div className=\"mt-16\"><button className=\"btn btn-primary float-right\" onClick={() => updateProfile()}>Update</button></div>\n            </TitleCard>\n        </>\n    )\n}\n\n\nexport default ProfileSettings"
  },
  {
    "path": "webui/src/features/settings/team/index.js",
    "content": "import moment from \"moment\"\nimport { useEffect, useState } from \"react\"\nimport { useDispatch, useSelector } from \"react-redux\"\nimport TitleCard from \"../../../components/Cards/TitleCard\"\nimport { showNotification } from '../../common/headerSlice'\n\nconst TopSideButtons = () => {\n\n    const dispatch = useDispatch()\n\n    const addNewTeamMember = () => {\n        dispatch(showNotification({message : \"Add New Member clicked\", status : 1}))\n    }\n\n    return(\n        <div className=\"inline-block float-right\">\n            <button className=\"btn px-6 btn-sm normal-case btn-primary\" onClick={() => addNewTeamMember()}>Invite New</button>\n        </div>\n    )\n}\n\n\nconst TEAM_MEMBERS = [\n    {name : \"Alex\", avatar : \"https://reqres.in/img/faces/1-image.jpg\", email : \"alex@dashwind.com\", role : \"Owner\", joinedOn : moment(new Date()).add(-5*1, 'days').format(\"DD MMM YYYY\"), lastActive : \"5 hr ago\"},\n    {name : \"Ereena\", avatar : \"https://reqres.in/img/faces/2-image.jpg\", email : \"ereena@dashwind.com\", role : \"Admin\", joinedOn : moment(new Date()).add(-5*2, 'days').format(\"DD MMM YYYY\"), lastActive : \"15 min ago\"},\n    {name : \"John\", avatar : \"https://reqres.in/img/faces/3-image.jpg\", email : \"jhon@dashwind.com\", role : \"Admin\", joinedOn : moment(new Date()).add(-5*3, 'days').format(\"DD MMM YYYY\"), lastActive : \"20 hr ago\"},\n    {name : \"Matrix\", avatar : \"https://reqres.in/img/faces/4-image.jpg\", email : \"matrix@dashwind.com\", role : \"Manager\", joinedOn : moment(new Date()).add(-5*4, 'days').format(\"DD MMM YYYY\"), lastActive : \"1 hr ago\"},\n    {name : \"Virat\", avatar : \"https://reqres.in/img/faces/5-image.jpg\", email : \"virat@dashwind.com\", role : \"Support\", joinedOn : moment(new Date()).add(-5*5, 'days').format(\"DD MMM YYYY\"), lastActive : \"40 min ago\"},\n    {name : \"Miya\", avatar : \"https://reqres.in/img/faces/6-image.jpg\", email : \"miya@dashwind.com\", role : \"Support\", joinedOn : moment(new Date()).add(-5*7, 'days').format(\"DD MMM YYYY\"), lastActive : \"5 hr ago\"},\n\n]\n\nfunction Team(){\n\n\n    const [members, setMembers] = useState(TEAM_MEMBERS)\n\n    const getRoleComponent = (role) => {\n        if(role  === \"Admin\")return <div className=\"badge badge-secondary\">{role}</div>\n        if(role  === \"Manager\")return <div className=\"badge\">{role}</div>\n        if(role  === \"Owner\")return <div className=\"badge badge-primary\">{role}</div>\n        if(role  === \"Support\")return <div className=\"badge badge-accent\">{role}</div>\n        else return <div className=\"badge badge-ghost\">{role}</div>\n    }\n\n    return(\n        <>\n            \n            <TitleCard title=\"Active Members\" topMargin=\"mt-2\" TopSideButtons={<TopSideButtons />}>\n\n                {/* Team Member list in table format loaded constant */}\n            <div className=\"overflow-x-auto w-full\">\n                <table className=\"table w-full\">\n                    <thead>\n                    <tr>\n                        <th>Name</th>\n                        <th>Email Id</th>\n                        <th>Joined On</th>\n                        <th>Role</th>\n                        <th>Last Active</th>\n                    </tr>\n                    </thead>\n                    <tbody>\n                        {\n                            members.map((l, k) => {\n                                return(\n                                    <tr key={k}>\n                   
                 <td>\n                                        <div className=\"flex items-center space-x-3\">\n                                            <div className=\"avatar\">\n                                                <div className=\"mask mask-circle w-12 h-12\">\n                                                    <img src={l.avatar} alt=\"Avatar\" />\n                                                </div>\n                                            </div>\n                                            <div>\n                                                <div className=\"font-bold\">{l.name}</div>\n                                            </div>\n                                        </div>\n                                    </td>\n                                    <td>{l.email}</td>\n                                    <td>{l.joinedOn}</td>\n                                    <td>{getRoleComponent(l.role)}</td>\n                                    <td>{l.lastActive}</td>\n                                    </tr>\n                                )\n                            })\n                        }\n                    </tbody>\n                </table>\n            </div>\n            </TitleCard>\n        </>\n    )\n}\n\n\nexport default Team"
  },
  {
    "path": "webui/src/features/transactions/index.js",
    "content": "import moment from \"moment\"\nimport { useEffect, useState } from \"react\"\nimport { useDispatch, useSelector } from \"react-redux\"\nimport { showNotification } from \"../common/headerSlice\"\nimport TitleCard from \"../../components/Cards/TitleCard\"\nimport { RECENT_TRANSACTIONS } from \"../../utils/dummyData\"\nimport FunnelIcon from '@heroicons/react/24/outline/FunnelIcon'\nimport XMarkIcon from '@heroicons/react/24/outline/XMarkIcon'\nimport SearchBar from \"../../components/Input/SearchBar\"\n\nconst TopSideButtons = ({removeFilter, applyFilter, applySearch}) => {\n\n    const [filterParam, setFilterParam] = useState(\"\")\n    const [searchText, setSearchText] = useState(\"\")\n    const locationFilters = [\"Paris\", \"London\", \"Canada\", \"Peru\", \"Tokyo\"]\n\n    const showFiltersAndApply = (params) => {\n        applyFilter(params)\n        setFilterParam(params)\n    }\n\n    const removeAppliedFilter = () => {\n        removeFilter()\n        setFilterParam(\"\")\n        setSearchText(\"\")\n    }\n\n    useEffect(() => {\n        if(searchText == \"\"){\n            removeAppliedFilter()\n        }else{\n            applySearch(searchText)\n        }\n    }, [searchText])\n\n    return(\n        <div className=\"inline-block float-right\">\n            <SearchBar searchText={searchText} styleClass=\"mr-4\" setSearchText={setSearchText}/>\n            {filterParam != \"\" && <button onClick={() => removeAppliedFilter()} className=\"btn btn-xs mr-2 btn-active btn-ghost normal-case\">{filterParam}<XMarkIcon className=\"w-4 ml-2\"/></button>}\n            <div className=\"dropdown dropdown-bottom dropdown-end\">\n                <label tabIndex={0} className=\"btn btn-sm btn-outline\"><FunnelIcon className=\"w-5 mr-2\"/>Filter</label>\n                <ul tabIndex={0} className=\"dropdown-content menu p-2 text-sm shadow bg-base-100 rounded-box w-52\">\n                    {\n                        locationFilters.map((l, k) => {\n                            return  <li key={k}><a onClick={() => showFiltersAndApply(l)}>{l}</a></li>\n                        })\n                    }\n                    <div className=\"divider mt-0 mb-0\"></div>\n                    <li><a onClick={() => removeAppliedFilter()}>Remove Filter</a></li>\n                </ul>\n            </div>\n        </div>\n    )\n}\n\n\nfunction Transactions(){\n\n\n    const [trans, setTrans] = useState(RECENT_TRANSACTIONS)\n\n    const removeFilter = () => {\n        setTrans(RECENT_TRANSACTIONS)\n    }\n\n    const applyFilter = (params) => {\n        let filteredTransactions = RECENT_TRANSACTIONS.filter((t) => {return t.location == params})\n        setTrans(filteredTransactions)\n    }\n\n    // Search according to name\n    const applySearch = (value) => {\n        let filteredTransactions = RECENT_TRANSACTIONS.filter((t) => {return t.email.toLowerCase().includes(value.toLowerCase()) ||  t.email.toLowerCase().includes(value.toLowerCase())})\n        setTrans(filteredTransactions)\n    }\n\n    return(\n        <>\n            \n            <TitleCard title=\"Recent Transactions\" topMargin=\"mt-2\" TopSideButtons={<TopSideButtons applySearch={applySearch} applyFilter={applyFilter} removeFilter={removeFilter}/>}>\n\n                {/* Team Member list in table format loaded constant */}\n            <div className=\"overflow-x-auto w-full\">\n                <table className=\"table w-full\">\n                    <thead>\n                    <tr>\n                        
<th>Name</th>\n                        <th>Email Id</th>\n                        <th>Location</th>\n                        <th>Amount</th>\n                        <th>Transaction Date</th>\n                    </tr>\n                    </thead>\n                    <tbody>\n                        {\n                            trans.map((l, k) => {\n                                return(\n                                    <tr key={k}>\n                                    <td>\n                                        <div className=\"flex items-center space-x-3\">\n                                            <div className=\"avatar\">\n                                                <div className=\"mask mask-circle w-12 h-12\">\n                                                    <img src={l.avatar} alt=\"Avatar\" />\n                                                </div>\n                                            </div>\n                                            <div>\n                                                <div className=\"font-bold\">{l.name}</div>\n                                            </div>\n                                        </div>\n                                    </td>\n                                    <td>{l.email}</td>\n                                    <td>{l.location}</td>\n                                    <td>${l.amount}</td>\n                                    <td>{moment(l.date).format(\"D MMM\")}</td>\n                                    </tr>\n                                )\n                            })\n                        }\n                    </tbody>\n                </table>\n            </div>\n            </TitleCard>\n        </>\n    )\n}\n\n\nexport default Transactions"
  },
  {
    "path": "webui/src/features/user/ForgotPassword.js",
    "content": "import {useState, useRef} from 'react'\nimport {Link} from 'react-router-dom'\nimport LandingIntro from './LandingIntro'\nimport ErrorText from  '../../components/Typography/ErrorText'\nimport InputText from '../../components/Input/InputText'\nimport CheckCircleIcon  from '@heroicons/react/24/solid/CheckCircleIcon'\n\nfunction ForgotPassword(){\n\n    const INITIAL_USER_OBJ = {\n        emailId : \"\"\n    }\n\n    const [loading, setLoading] = useState(false)\n    const [errorMessage, setErrorMessage] = useState(\"\")\n    const [linkSent, setLinkSent] = useState(false)\n    const [userObj, setUserObj] = useState(INITIAL_USER_OBJ)\n\n    const submitForm = (e) =>{\n        e.preventDefault()\n        setErrorMessage(\"\")\n\n        if(userObj.emailId.trim() === \"\")return setErrorMessage(\"Email Id is required! (use any value)\")\n        else{\n            setLoading(true)\n            // Call API to send password reset link\n            setLoading(false)\n            setLinkSent(true)\n        }\n    }\n\n    const updateFormValue = ({updateType, value}) => {\n        setErrorMessage(\"\")\n        setUserObj({...userObj, [updateType] : value})\n    }\n\n    return(\n        <div className=\"min-h-screen bg-base-200 flex items-center\">\n            <div className=\"card mx-auto w-full max-w-5xl  shadow-xl\">\n                <div className=\"grid  md:grid-cols-2 grid-cols-1  bg-base-100 rounded-xl\">\n                <div className=''>\n                        <LandingIntro />\n                </div>\n                <div className='py-24 px-10'>\n                    <h2 className='text-2xl font-semibold mb-2 text-center'>Forgot Password</h2>\n\n                    {\n                        linkSent && \n                        <>\n                            <div className='text-center mt-8'><CheckCircleIcon className='inline-block w-32 text-success'/></div>\n                            <p className='my-4 text-xl font-bold text-center'>Link Sent</p>\n                            <p className='mt-4 mb-8 font-semibold text-center'>Check your email to reset password</p>\n                            <div className='text-center mt-4'><Link to=\"/login\"><button className=\"btn btn-block btn-primary \">Login</button></Link></div>\n\n                        </>\n                    }\n\n                    {\n                        !linkSent && \n                        <>\n                            <p className='my-8 font-semibold text-center'>We will send password reset link on your email Id</p>\n                            <form onSubmit={(e) => submitForm(e)}>\n\n                                <div className=\"mb-4\">\n\n                                    <InputText type=\"emailId\" defaultValue={userObj.emailId} updateType=\"emailId\" containerStyle=\"mt-4\" labelTitle=\"Email Id\" updateFormValue={updateFormValue}/>\n\n\n                                </div>\n\n                                <ErrorText styleClass=\"mt-12\">{errorMessage}</ErrorText>\n                                <button type=\"submit\" className={\"btn mt-2 w-full btn-primary\" + (loading ? \" loading\" : \"\")}>Send Reset Link</button>\n\n                                <div className='text-center mt-4'>Don't have an account yet? 
<Link to=\"/register\"><button className=\"  inline-block  hover:text-primary hover:underline hover:cursor-pointer transition duration-200\">Register</button></Link></div>\n                            </form>\n                        </>\n                    }\n                    \n                </div>\n            </div>\n            </div>\n        </div>\n    )\n}\n\nexport default ForgotPassword"
  },
  {
    "path": "webui/src/features/user/LandingIntro.js",
    "content": "import TemplatePointers from \"./components/TemplatePointers\"\n\n\n\nfunction LandingIntro(){\n\n    return(\n        <div className=\"hero min-h-full rounded-l-xl bg-base-200\">\n            <div className=\"hero-content py-12\">\n              <div className=\"max-w-md\">\n\n              <h1 className='text-3xl text-center font-bold '><img src=\"/transopt.png\" className=\"w-12 inline-block mr-2 mask\" alt=\"transopt\" /></h1>\n\n                <div className=\"text-center mt-12\"><img src=\"./transopt.png\" alt=\"TransOPT\" className=\"w-48 inline-block\"></img></div>\n              \n              {/* Importing pointers component */}\n              <TemplatePointers />\n              \n              </div>\n\n            </div>\n          </div>\n    )\n      \n  }\n  \n  export default LandingIntro"
  },
  {
    "path": "webui/src/features/user/Login.js",
    "content": "import {useState, useRef} from 'react'\nimport {Link} from 'react-router-dom'\nimport LandingIntro from './LandingIntro'\nimport ErrorText from  '../../components/Typography/ErrorText'\nimport InputText from '../../components/Input/InputText'\n\nfunction Login(){\n\n    const INITIAL_LOGIN_OBJ = {\n        password : \"\",\n        emailId : \"\"\n    }\n\n    const [loading, setLoading] = useState(false)\n    const [errorMessage, setErrorMessage] = useState(\"\")\n    const [loginObj, setLoginObj] = useState(INITIAL_LOGIN_OBJ)\n\n    const submitForm = (e) =>{\n        e.preventDefault()\n        setErrorMessage(\"\")\n\n        if(loginObj.emailId.trim() === \"\")return setErrorMessage(\"Email Id is required! (use any value)\")\n        if(loginObj.password.trim() === \"\")return setErrorMessage(\"Password is required! (use any value)\")\n        else{\n            setLoading(true)\n            // Call API to check user credentials and save token in localstorage\n            localStorage.setItem(\"token\", \"DumyTokenHere\")\n            setLoading(false)\n            window.location.href = '/app/welcome'\n        }\n    }\n\n    const updateFormValue = ({updateType, value}) => {\n        setErrorMessage(\"\")\n        setLoginObj({...loginObj, [updateType] : value})\n    }\n\n    return(\n        <div className=\"min-h-screen bg-base-200 flex items-center\">\n            <div className=\"card mx-auto w-full max-w-5xl  shadow-xl\">\n                <div className=\"grid  md:grid-cols-2 grid-cols-1  bg-base-100 rounded-xl\">\n                <div className=''>\n                        <LandingIntro />\n                </div>\n                <div className='py-24 px-10'>\n                    <h2 className='text-2xl font-semibold mb-2 text-center'>Login</h2>\n                    <form onSubmit={(e) => submitForm(e)}>\n\n                        <div className=\"mb-4\">\n\n                            <InputText type=\"emailId\" defaultValue={loginObj.emailId} updateType=\"emailId\" containerStyle=\"mt-4\" labelTitle=\"Email Id\" updateFormValue={updateFormValue}/>\n\n                            <InputText defaultValue={loginObj.password} type=\"password\" updateType=\"password\" containerStyle=\"mt-4\" labelTitle=\"Password\" updateFormValue={updateFormValue}/>\n\n                        </div>\n\n                        <div className='text-right text-primary'><Link to=\"/forgot-password\"><span className=\"text-sm  inline-block  hover:text-primary hover:underline hover:cursor-pointer transition duration-200\">Forgot Password?</span></Link>\n                        </div>\n\n                        <ErrorText styleClass=\"mt-8\">{errorMessage}</ErrorText>\n                        <button type=\"submit\" className={\"btn mt-2 w-full btn-primary\" + (loading ? \" loading\" : \"\")}>Login</button>\n\n                        <div className='text-center mt-4'>Don't have an account yet? <Link to=\"/register\"><span className=\"  inline-block  hover:text-primary hover:underline hover:cursor-pointer transition duration-200\">Register</span></Link></div>\n                    </form>\n                </div>\n            </div>\n            </div>\n        </div>\n    )\n}\n\nexport default Login"
  },
  {
    "path": "webui/src/features/user/Register.js",
    "content": "import {useState, useRef} from 'react'\nimport {Link} from 'react-router-dom'\nimport LandingIntro from './LandingIntro'\nimport ErrorText from  '../../components/Typography/ErrorText'\nimport InputText from '../../components/Input/InputText'\n\nfunction Register(){\n\n    const INITIAL_REGISTER_OBJ = {\n        name : \"\",\n        password : \"\",\n        emailId : \"\"\n    }\n\n    const [loading, setLoading] = useState(false)\n    const [errorMessage, setErrorMessage] = useState(\"\")\n    const [registerObj, setRegisterObj] = useState(INITIAL_REGISTER_OBJ)\n\n    const submitForm = (e) =>{\n        e.preventDefault()\n        setErrorMessage(\"\")\n\n        if(registerObj.name.trim() === \"\")return setErrorMessage(\"Name is required! (use any value)\")\n        if(registerObj.emailId.trim() === \"\")return setErrorMessage(\"Email Id is required! (use any value)\")\n        if(registerObj.password.trim() === \"\")return setErrorMessage(\"Password is required! (use any value)\")\n        else{\n            setLoading(true)\n            // Call API to check user credentials and save token in localstorage\n            localStorage.setItem(\"token\", \"DumyTokenHere\")\n            setLoading(false)\n            window.location.href = '/app/welcome'\n        }\n    }\n\n    const updateFormValue = ({updateType, value}) => {\n        setErrorMessage(\"\")\n        setRegisterObj({...registerObj, [updateType] : value})\n    }\n\n    return(\n        <div className=\"min-h-screen bg-base-200 flex items-center\">\n            <div className=\"card mx-auto w-full max-w-5xl  shadow-xl\">\n                <div className=\"grid  md:grid-cols-2 grid-cols-1  bg-base-100 rounded-xl\">\n                <div className=''>\n                        <LandingIntro />\n                </div>\n                <div className='py-24 px-10'>\n                    <h2 className='text-2xl font-semibold mb-2 text-center'>Register</h2>\n                    <form onSubmit={(e) => submitForm(e)}>\n\n                        <div className=\"mb-4\">\n\n                            <InputText defaultValue={registerObj.name} updateType=\"name\" containerStyle=\"mt-4\" labelTitle=\"Name\" updateFormValue={updateFormValue}/>\n\n                            <InputText defaultValue={registerObj.emailId} updateType=\"emailId\" containerStyle=\"mt-4\" labelTitle=\"Email Id\" updateFormValue={updateFormValue}/>\n\n                            <InputText defaultValue={registerObj.password} type=\"password\" updateType=\"password\" containerStyle=\"mt-4\" labelTitle=\"Password\" updateFormValue={updateFormValue}/>\n\n                        </div>\n\n                        <ErrorText styleClass=\"mt-8\">{errorMessage}</ErrorText>\n                        <button type=\"submit\" className={\"btn mt-2 w-full btn-primary\" + (loading ? \" loading\" : \"\")}>Register</button>\n\n                        <div className='text-center mt-4'>Already have an account? <Link to=\"/login\"><span className=\"  inline-block  hover:text-primary hover:underline hover:cursor-pointer transition duration-200\">Login</span></Link></div>\n                    </form>\n                </div>\n            </div>\n            </div>\n        </div>\n    )\n}\n\nexport default Register"
  },
  {
    "path": "webui/src/features/user/components/TemplatePointers.js",
    "content": "function TemplatePointers(){\n    return(\n        <>\n         <h1 className=\"text-2xl mt-8 font-bold\">Admin Dashboard Starter Kit</h1>\n          <p className=\"py-2 mt-4\">✓ <span className=\"font-semibold\">Light/dark</span> mode toggle</p>\n          <p className=\"py-2 \">✓ <span className=\"font-semibold\">Redux toolkit</span> and other utility libraries configured</p>\n          <p className=\"py-2\">✓ <span className=\"font-semibold\">Calendar, Modal, Sidebar </span> components</p>\n          <p className=\"py-2  \">✓ User-friendly <span className=\"font-semibold\">documentation</span></p>\n          <p className=\"py-2  mb-4\">✓ <span className=\"font-semibold\">Daisy UI</span> components, <span className=\"font-semibold\">Tailwind CSS</span> support</p>\n        </>\n    )\n}\n\nexport default TemplatePointers"
  },
  {
    "path": "webui/src/index.css",
    "content": "@tailwind base;\n@tailwind components;\n@tailwind utilities;\n\n.loading-indicator:before {\n    content: '';\n    background: #00000080;\n    position: fixed;\n    width: 100%;\n    height: 100%;\n    top: 0;\n    left: 0;\n    z-index: 1000;\n  }\n  \n  .loading-indicator:after {\n    content: ' ';\n    position: fixed;\n    top: 40%;\n    left: 45%;\n    z-index: 10010;\n    color:white;\n    text-align:center;\n    font-weight:bold;\n    font-size:1.2rem;        \n    border: 16px solid #f3f3f3; /* Light grey */\n    border-top: 16px solid #0474bf; /* Blue */\n    border-radius: 50%;\n    width: 120px;\n    height: 120px;\n    animation: spin 2s linear infinite;\n}"
  },
  {
    "path": "webui/src/index.js",
    "content": "import React,  { Suspense } from 'react';\nimport ReactDOM from 'react-dom/client';\nimport './index.css';\nimport App from './App';\nimport reportWebVitals from './reportWebVitals';\nimport store from './app/store'\nimport { Provider } from 'react-redux'\nimport SuspenseContent from './containers/SuspenseContent';\n\nconst root = ReactDOM.createRoot(document.getElementById('root'));\nroot.render(\n  // <React.StrictMode>\n    <Suspense fallback={<SuspenseContent />}>\n        <Provider store={store}>\n            <App />\n        </Provider>\n    </Suspense>\n  // </React.StrictMode>\n);\n\n// If you want to start measuring performance in your app, pass a function\n// to log results (for example: reportWebVitals(console.log))\n// or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals\nreportWebVitals();\n"
  },
  {
    "path": "webui/src/pages/GettingStarted.js",
    "content": "import {useState, useRef} from 'react'\nimport {Link} from 'react-router-dom'\nimport DocGettingStarted from '../features/documentation/DocGettingStarted'\n\nfunction ExternalPage(){\n\n\n    return(\n        <div className=\"\">\n            <DocGettingStarted />\n        </div>\n    )\n}\n\nexport default ExternalPage"
  },
  {
    "path": "webui/src/pages/protected/404.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport FaceFrownIcon  from '@heroicons/react/24/solid/FaceFrownIcon'\n\nfunction InternalPage(){\n\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"\"}))\n      }, [])\n\n    return(\n        <div className=\"hero h-4/5 bg-base-200\">\n            <div className=\"hero-content text-accent text-center\">\n                <div className=\"max-w-md\">\n                <FaceFrownIcon className=\"h-48 w-48 inline-block\"/>\n                <h1 className=\"text-5xl  font-bold\">404 - Not Found</h1>\n                </div>\n            </div>\n        </div>\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Algorithm.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport Algorithm from '../../features/algorithm/index'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Choose Algorithm Objects\"}))\n      }, [])\n\n\n    return(\n        <Algorithm />\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Analytics.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport Analytics from '../../features/analytics/index'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Analytics\"}))\n      }, [])\n\n\n    return(\n        <Analytics />\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Bills.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport Billing from '../../features/settings/billing'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Bills\"}))\n      }, [])\n\n\n    return(\n        <Billing />\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Blank.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\n\nimport DocumentIcon  from '@heroicons/react/24/solid/DocumentIcon'\n\nfunction InternalPage(){\n\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Page Title\"}))\n      }, [])\n      \n    return(\n        <div className=\"hero h-4/5 bg-base-200\">\n            <div className=\"hero-content text-accent text-center\">\n                <div className=\"max-w-md\">\n                <DocumentIcon className=\"h-48 w-48 inline-block\"/>\n                <h1 className=\"text-5xl mt-2 font-bold\">Blank Page</h1>\n                </div>\n            </div>\n        </div>\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Calendar.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport Calendar from '../../features/calendar'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Calendar\"}))\n      }, [])\n\n\n    return(\n        <Calendar />\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Charts.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport Charts from '../../features/charts'\nimport { setPageTitle } from '../../features/common/headerSlice'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Analytics\"}))\n      }, [])\n\n\n    return(\n        <Charts />\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/ChatOpt.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport ChatBot from '../../features/chatbot/ChatBot'\nimport { setPageTitle } from '../../features/common/headerSlice'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"ChatOpt\"}))\n      }, [])\n\n\n    return(\n        <ChatBot/>\n    )\n}\n\nexport default InternalPage\n\n\n\n\n\n\n"
  },
  {
    "path": "webui/src/pages/protected/Dashboard.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport Dashboard from '../../features/dashboard/index'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Dashboard\"}))\n      }, [])\n\n\n    return(\n        <Dashboard />\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Experiment.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport Experiment from '../../features/experiment/index'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Create New Experiment\" }))\n      }, [])\n\n\n    return(\n        <Experiment />\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Integration.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport Integration from '../../features/integration'\n\nfunction InternalPage(){\n\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Integrations\"}))\n      }, [])\n      \n    return(\n        <Integration />\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Leads.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport Leads from '../../features/leads'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Leads\"}))\n      }, [])\n\n\n    return(\n        <Leads />\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/ProfileSettings.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport ProfileSettings from '../../features/settings/profilesettings'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Settings\"}))\n      }, [])\n\n\n    return(\n        <ProfileSettings />\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Run.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport RunPage from '../../features/run/index'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Run Optimization\"}))\n      }, [])\n\n\n    return(\n        <RunPage />\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Seldata.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport Seldata from '../../features/seldata/index'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Select Datasets\"}))\n      }, [])\n\n\n    return(\n        <Seldata />\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Team.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport Team from '../../features/settings/team'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Team Members\"}))\n      }, [])\n\n\n    return(\n        <Team/>\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Transactions.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport Transactions from '../../features/transactions'\n\nfunction InternalPage(){\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"Transactions\"}))\n      }, [])\n\n\n    return(\n        <Transactions />\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/pages/protected/Welcome.js",
    "content": "import { useEffect } from 'react'\nimport { useDispatch } from 'react-redux'\nimport { setPageTitle } from '../../features/common/headerSlice'\nimport {Link} from 'react-router-dom'\nimport TemplatePointers from '../../features/user/components/TemplatePointers'\n\nfunction InternalPage(){\n\n    const dispatch = useDispatch()\n\n    useEffect(() => {\n        dispatch(setPageTitle({ title : \"\"}))\n      }, [])\n\n    return(\n      <div className=\"hero h-4/5 bg-base-200\">\n      <div className=\"hero-content\">\n        <div className=\"max-w-md\">\n            <TemplatePointers />\n            <Link to=\"/app/dashboard\"><button className=\"btn bg-base-100 btn-outline\">Get Started</button></Link>\n        </div>\n      </div>\n    </div>\n    )\n}\n\nexport default InternalPage"
  },
  {
    "path": "webui/src/reportWebVitals.js",
    "content": "const reportWebVitals = onPerfEntry => {\n  if (onPerfEntry && onPerfEntry instanceof Function) {\n    import('web-vitals').then(({ getCLS, getFID, getFCP, getLCP, getTTFB }) => {\n      getCLS(onPerfEntry);\n      getFID(onPerfEntry);\n      getFCP(onPerfEntry);\n      getLCP(onPerfEntry);\n      getTTFB(onPerfEntry);\n    });\n  }\n};\n\nexport default reportWebVitals;\n"
  },
  {
    "path": "webui/src/routes/index.js",
    "content": "// All components mapping with path for internal routes\n\nimport { lazy } from 'react'\n\nconst Dashboard = lazy(() => import('../pages/protected/Dashboard'))\nconst Welcome = lazy(() => import('../pages/protected/Welcome'))\nconst ChatOpt = lazy(() => import('../pages/protected/ChatOpt'))\nconst Experiment = lazy(() => import('../pages/protected/Experiment'))\nconst Run = lazy(() => import('../pages/protected/Run'))\nconst Selectdatasets = lazy(() => import('../pages/protected/Seldata'))\nconst Analytics = lazy(() => import('../pages/protected/Analytics'))\n\n\n\n// const Page404 = lazy(() => import('../pages/protected/404'))\n// const Blank = lazy(() => import('../pages/protected/Blank'))\n// const Charts = lazy(() => import('../pages/protected/Charts'))\n// const Leads = lazy(() => import('../pages/protected/Leads'))\n// const Integration = lazy(() => import('../pages/protected/Integration'))\n// const Calendar = lazy(() => import('../pages/protected/Calendar'))\n// const Team = lazy(() => import('../pages/protected/Team'))\n// const Transactions = lazy(() => import('../pages/protected/Transactions'))\n// const Bills = lazy(() => import('../pages/protected/Bills'))\n// const ProfileSettings = lazy(() => import('../pages/protected/ProfileSettings'))\nconst GettingStarted = lazy(() => import('../pages/GettingStarted'))\n// const DocFeatures = lazy(() => import('../pages/DocFeatures'))\n// const DocComponents = lazy(() => import('../pages/DocComponents'))\n\n\nconst routes = [\n  {\n    path: '/dashboard', // the url\n    component: Dashboard, // view rendered\n  },\n  {\n    path: '/welcome', // the url\n    component: Welcome, // view rendered\n  },\n\n  {\n    path: '/chatopt', // the url\n    component: ChatOpt, // view rendered\n  },\n\n  {\n    path: '/optimization/problem', // the url\n    component: Experiment, // view rendered\n  },\n\n\n  {\n    path: '/optimization/selectdatasets', // the url\n    component: Selectdatasets, // view rendered\n  },\n\n  {\n    path: '/optimization/run', // the url\n    component: Run, // view rendered\n  },\n\n  {\n    path: '/analytics', // the url\n    component: Analytics, // view rendered\n  },\n\n\n\n  // {\n  //   path: '/leads',\n  //   component: Leads,\n  // },\n  // {\n  //   path: '/settings-team',\n  //   component: Team,\n  // },\n  // {\n  //   path: '/calendar',\n  //   component: Calendar,\n  // },\n  // {\n  //   path: '/transactions',\n  //   component: Transactions,\n  // },\n  // {\n  //   path: '/settings-profile',\n  //   component: ProfileSettings,\n  // },\n  // {\n  //   path: '/settings-billing',\n  //   component: Bills,\n  // },\n  // {\n  //   path: '/getting-started',\n  //   component: GettingStarted,\n  // },\n  // {\n  //   path: '/features',\n  //   component: DocFeatures,\n  // },\n  // {\n  //   path: '/components',\n  //   component: DocComponents,\n  // },\n  // {\n  //   path: '/integration',\n  //   component: Integration,\n  // },\n  // {\n  //   path: '/charts',\n  //   component: Charts,\n  // },\n  // {\n  //   path: '/404',\n  //   component: Page404,\n  // },\n  // {\n  //   path: '/blank',\n  //   component: Blank,\n  // },\n]\n\nexport default routes\n"
  },
  {
    "path": "webui/src/routes/sidebar.js",
    "content": "/** Icons are imported separatly to reduce build time */\nimport Squares2X2Icon from '@heroicons/react/24/outline/Squares2X2Icon'\nimport ChartBarIcon from '@heroicons/react/24/outline/ChartBarIcon'\nimport ChatBubbleLeftIcon from '@heroicons/react/24/outline/ChatBubbleLeftIcon'\nimport QuestionMarkCircleIcon from '@heroicons/react/24/outline/QuestionMarkCircleIcon'\nimport FolderOpenIcon from '@heroicons/react/24/outline/FolderOpenIcon'\nimport PlayIcon from '@heroicons/react/24/outline/PlayIcon'\nimport CogIcon from '@heroicons/react/24/outline/CogIcon'\nimport AdjustmentsHorizontalIcon from '@heroicons/react/24/outline/AdjustmentsHorizontalIcon'\n\n\nconst iconClasses = `h-6 w-6`\nconst submenuIconClasses = `h-5 w-5`\n\nconst routes = [\n\n  {\n    path: '/app/dashboard',\n    icon: <Squares2X2Icon className={iconClasses}/>, \n    name: 'Dashboard',\n  },\n\n  {\n    path: '/app/optimization', //no url needed as this has submenu\n    icon: <CogIcon className={`${iconClasses} inline` }/>, // icon component\n    name: 'Experiments', // name that appear in Sidebar\n    submenu : [\n      {\n        path: '/app/optimization/problem',\n        icon: <QuestionMarkCircleIcon className={submenuIconClasses}/>,\n        name: 'Create New Experiment',\n      },\n\n      {\n        path: '/app/optimization/selectdatasets',\n        icon: <FolderOpenIcon className={submenuIconClasses}/>,\n        name: 'Select Datasets',\n      },\n      {\n        path: '/app/optimization/run',\n        icon: <PlayIcon className={submenuIconClasses}/>,\n        name: 'Run',\n      },\n    ]\n  },\n\n  {\n    path: '/app/analytics', // url\n    icon: <ChartBarIcon className={iconClasses}/>, // icon component\n    name: 'Analytics', // name that appear in Sidebar\n  },\n  \n  {\n    path: '/app/chatopt',\n    icon: <ChatBubbleLeftIcon className={iconClasses}/>, \n    name: 'ChatOpt',\n  },\n\n]\n\nexport default routes\n\n\n"
  },
  {
    "path": "webui/src/setupTests.js",
    "content": "// jest-dom adds custom jest matchers for asserting on DOM nodes.\n// allows you to do things like:\n// expect(element).toHaveTextContent(/react/i)\n// learn more: https://github.com/testing-library/jest-dom\nimport '@testing-library/jest-dom';\n"
  },
  {
    "path": "webui/src/utils/dummyData.js",
    "content": "const moment  = require(\"moment\");\n\nmodule.exports = Object.freeze({\n    CALENDAR_INITIAL_EVENTS : [\n        {title : \"Product call\", theme : \"GREEN\", startTime : moment().add(-12, 'd').startOf('day'), endTime : moment().add(-12, 'd').endOf('day')},\n        {title : \"Meeting with tech team\", theme : \"PINK\", startTime : moment().add(-8, 'd').startOf('day'), endTime : moment().add(-8, 'd').endOf('day')},\n        {title : \"Meeting with Cristina\", theme : \"PURPLE\", startTime : moment().add(-2, 'd').startOf('day'), endTime : moment().add(-2, 'd').endOf('day')},\n        {title : \"Meeting with Alex\", theme : \"BLUE\", startTime : moment().startOf('day'), endTime : moment().endOf('day')}, \n        {title : \"Product Call\", theme : \"GREEN\", startTime : moment().startOf('day'), endTime : moment().endOf('day')},\n        {title : \"Client Meeting\", theme : \"PURPLE\", startTime : moment().startOf('day'), endTime : moment().endOf('day')},\n        {title : \"Client Meeting\", theme : \"ORANGE\", startTime : moment().add(3, 'd').startOf('day'), endTime : moment().add(3, 'd').endOf('day')},\n        {title : \"Product meeting\", theme : \"PINK\", startTime : moment().add(5, 'd').startOf('day'), endTime : moment().add(5, 'd').endOf('day')},\n        {title : \"Sales Meeting\", theme : \"GREEN\", startTime : moment().add(8, 'd').startOf('day'), endTime : moment().add(8, 'd').endOf('day')},\n        {title : \"Product Meeting\", theme : \"ORANGE\", startTime : moment().add(8, 'd').startOf('day'), endTime : moment().add(8, 'd').endOf('day')},\n        {title : \"Marketing Meeting\", theme : \"PINK\", startTime : moment().add(8, 'd').startOf('day'), endTime : moment().add(8, 'd').endOf('day')},\n        {title : \"Client Meeting\", theme : \"GREEN\", startTime : moment().add(8, 'd').startOf('day'), endTime : moment().add(8, 'd').endOf('day')},\n        {title : \"Sales meeting\", theme : \"BLUE\", startTime : moment().add(12, 'd').startOf('day'), endTime : moment().add(12, 'd').endOf('day')},\n        {title : \"Client meeting\", theme : \"PURPLE\", startTime : moment().add(16, 'd').startOf('day'), endTime : moment().add(16, 'd').endOf('day')},\n    ],\n\n    RECENT_TRANSACTIONS : [\n        {name : \"Alex\", avatar : \"https://reqres.in/img/faces/1-image.jpg\", email : \"alex@dashwind.com\", location : \"Paris\", amount : 100, date : moment().endOf('day')},\n        {name : \"Ereena\", avatar : \"https://reqres.in/img/faces/2-image.jpg\", email : \"ereena@dashwind.com\", location : \"London\", amount : 190, date : moment().add(-1, 'd').endOf('day')},\n        {name : \"John\", avatar : \"https://reqres.in/img/faces/3-image.jpg\", email : \"jhon@dashwind.com\", location : \"Canada\", amount : 112, date : moment().add(-1, 'd').endOf('day')},\n        {name : \"Matrix\", avatar : \"https://reqres.in/img/faces/4-image.jpg\", email : \"matrix@dashwind.com\", location : \"Peru\", amount : 111, date : moment().add(-1, 'd').endOf('day')},\n        {name : \"Virat\", avatar : \"https://reqres.in/img/faces/5-image.jpg\", email : \"virat@dashwind.com\", location : \"London\", amount : 190, date : moment().add(-2, 'd').endOf('day')},\n        {name : \"Miya\", avatar : \"https://reqres.in/img/faces/6-image.jpg\", email : \"miya@dashwind.com\", location : \"Paris\", amount : 230, date : moment().add(-2, 'd').endOf('day')},\n        {name : \"Virat\", avatar : \"https://reqres.in/img/faces/3-image.jpg\", email : \"virat@dashwind.com\", location : \"Canada\", amount : 331, date 
: moment().add(-2, 'd').endOf('day')},\n        {name : \"Matrix\", avatar : \"https://reqres.in/img/faces/1-image.jpg\", email : \"matrix@dashwind.com\", location : \"London\", amount : 581, date : moment().add(-2, 'd').endOf('day')},\n        {name : \"Ereena\", avatar : \"https://reqres.in/img/faces/3-image.jpg\", email : \"ereena@dashwind.com\", location : \"Tokyo\", amount : 151, date : moment().add(-2, 'd').endOf('day')},\n        {name : \"John\", avatar : \"https://reqres.in/img/faces/2-image.jpg\", email : \"jhon@dashwind.com\", location : \"Paris\", amount : 91, date : moment().add(-2, 'd').endOf('day')},\n        {name : \"Virat\", avatar : \"https://reqres.in/img/faces/3-image.jpg\", email : \"virat@dashwind.com\", location : \"Canada\", amount : 161, date : moment().add(-3, 'd').endOf('day')},\n        {name : \"Matrix\", avatar : \"https://reqres.in/img/faces/4-image.jpg\", email : \"matrix@dashwind.com\", location : \"US\", amount : 121, date : moment().add(-3, 'd').endOf('day')},\n        {name : \"Ereena\", avatar : \"https://reqres.in/img/faces/6-image.jpg\", email : \"jhon@dashwind.com\", location : \"Tokyo\", amount : 713, date : moment().add(-3, 'd').endOf('day')},\n        {name : \"John\", avatar : \"https://reqres.in/img/faces/2-image.jpg\", email : \"ereena@dashwind.com\", location : \"London\", amount : 217, date : moment().add(-3, 'd').endOf('day')},\n        {name : \"Virat\", avatar : \"https://reqres.in/img/faces/3-image.jpg\", email : \"virat@dashwind.com\", location : \"Paris\", amount : 117, date : moment().add(-3, 'd').endOf('day')},\n        {name : \"Miya\", avatar : \"https://reqres.in/img/faces/7-image.jpg\", email : \"jhon@dashwind.com\", location : \"Canada\", amount : 612, date : moment().add(-3, 'd').endOf('day')},\n        {name : \"Matrix\", avatar : \"https://reqres.in/img/faces/3-image.jpg\", email : \"matrix@dashwind.com\", location : \"London\", amount : 631, date : moment().add(-3, 'd').endOf('day')},\n        {name : \"Virat\", avatar : \"https://reqres.in/img/faces/2-image.jpg\", email : \"ereena@dashwind.com\", location : \"Tokyo\", amount : 151, date : moment().add(-3, 'd').endOf('day')},\n        {name : \"Ereena\", avatar : \"https://reqres.in/img/faces/3-image.jpg\", email : \"virat@dashwind.com\", location : \"Paris\", amount : 617, date : moment().add(-3, 'd').endOf('day')},\n\n    \n    ]\n});\n"
  },
  {
    "path": "webui/src/utils/globalConstantUtil.js",
    "content": "\nmodule.exports = Object.freeze({\n    MODAL_BODY_TYPES : {\n        USER_DETAIL : \"USER_DETAIL\",\n        LEAD_ADD_NEW : \"LEAD_ADD_NEW\",\n        CONFIRMATION : \"CONFIRMATION\",\n        DEFAULT : \"\",\n    },\n\n    RIGHT_DRAWER_TYPES : {\n        NOTIFICATION : \"NOTIFICATION\",\n        CALENDAR_EVENTS : \"CALENDAR_EVENTS\",\n    },\n\n    CONFIRMATION_MODAL_CLOSE_TYPES : {\n        LEAD_DELETE : \"LEAD_DELETE\",\n    },\n});\n"
  },
  {
    "path": "webui/tailwind.config.js",
    "content": "/** @type {import('tailwindcss').Config} */\nmodule.exports = {\n  content: [\n    \"./src/**/*.{js,jsx,ts,tsx}\",\n    \"./node_modules/react-tailwindcss-datepicker/dist/index.esm.js\"\n  ],\n  darkMode: [\"class\", '[data-theme=\"dark\"]'],\n  theme: {\n    extend: {},\n  },\n  plugins: [require(\"@tailwindcss/typography\"), require(\"daisyui\")],\n  daisyui: {\n    themes: [\"light\", \"dark\"],\n  },\n\n}\n"
  }
]