Repository: qianlima-lab/time-series-ptms
Branch: master
Commit: 2c2e712779f8
Files: 486
Total size: 3.7 MB
Directory structure:
gitextract_owuozh1h/
├── .idea/
│ ├── .gitignore
│ ├── deployment.xml
│ ├── inspectionProfiles/
│ │ ├── Project_Default.xml
│ │ └── profiles_settings.xml
│ ├── modules.xml
│ ├── time-series-ptms.iml
│ └── vcs.xml
├── README.md
├── ts_anomaly_detection_methods/
│ ├── README.md
│ ├── anomaly_transformer/
│ │ ├── ATmodelbatch.py
│ │ ├── datautils.py
│ │ ├── models/
│ │ │ ├── __init__.py
│ │ │ ├── anomaly_transformer_model.py
│ │ │ ├── dilated_conv.py
│ │ │ ├── encoder.py
│ │ │ └── losses.py
│ │ ├── tasks/
│ │ │ ├── __init__.py
│ │ │ └── anomaly_detection.py
│ │ ├── train.py
│ │ ├── trainATbatch.py
│ │ ├── ts2vec.py
│ │ └── utils.py
│ └── other_anomaly_baselines/
│ ├── AT_solver.py
│ ├── ATmodelbatch.py
│ ├── README.md
│ ├── dataset_read_test.py
│ ├── datautils.py
│ ├── dcdetector_solver.py
│ ├── donut.py
│ ├── exp_anomaly_detection.py
│ ├── hello_test_evo.py
│ ├── lstm_vae.py
│ ├── metrics/
│ │ ├── AUC.py
│ │ ├── Matthews_correlation_coefficient.py
│ │ ├── affiliation/
│ │ │ ├── _affiliation_zone.py
│ │ │ ├── _integral_interval.py
│ │ │ ├── _single_ground_truth_event.py
│ │ │ ├── generics.py
│ │ │ └── metrics.py
│ │ ├── combine_all_scores.py
│ │ ├── customizable_f1_score.py
│ │ ├── evaluate_utils.py
│ │ ├── evaluator.py
│ │ ├── f1_score_f1_pa.py
│ │ ├── f1_series.py
│ │ ├── fc_score.py
│ │ ├── metrics.py
│ │ ├── precision_at_k.py
│ │ └── vus/
│ │ ├── analysis/
│ │ │ ├── robustness_eval.py
│ │ │ └── score_computation.py
│ │ ├── metrics.py
│ │ ├── models/
│ │ │ ├── distance.py
│ │ │ └── feature.py
│ │ └── utils/
│ │ ├── metrics.py
│ │ └── slidingWindows.py
│ ├── models/
│ │ ├── AnomalyTransformer.py
│ │ ├── DCdetector.py
│ │ ├── GPT4TS.py
│ │ ├── TimesNet.py
│ │ ├── __init__.py
│ │ ├── dilated_conv.py
│ │ ├── donut_model.py
│ │ ├── encoder.py
│ │ ├── losses.py
│ │ └── lstm_vae_model.py
│ ├── new_dataset_read_test.py
│ ├── scripts/
│ │ ├── at_zeta0.sh
│ │ ├── at_zeta1.sh
│ │ ├── generator_sh.py
│ │ ├── kpi.sh
│ │ ├── multi_at.sh
│ │ ├── ucr_at.sh
│ │ ├── ucr_at_delta_0.sh
│ │ ├── ucr_at_delta_1.sh
│ │ ├── ucr_at_delta_1_2.sh
│ │ ├── ucr_at_zeta0.sh
│ │ ├── uni_at.sh
│ │ └── yahoo.sh
│ ├── spot.py
│ ├── tasks/
│ │ ├── __init__.py
│ │ └── anomaly_detection.py
│ ├── train.py
│ ├── trainATbatch.py
│ ├── train_at_multi.py
│ ├── train_at_uni.py
│ ├── train_dcdetector.py
│ ├── train_dcdetector_nui.py
│ ├── train_donut.py
│ ├── train_donut_multi.py
│ ├── train_dspot.py
│ ├── train_dspot_multi.py
│ ├── train_gpt4ts.py
│ ├── train_gpt4ts_uni.py
│ ├── train_lstm_vae.py
│ ├── train_lstm_vae_multi.py
│ ├── train_spot.py
│ ├── train_spot_multi.py
│ ├── train_timesnet.py
│ ├── train_timesnet_uni.py
│ ├── train_ts2vec.py
│ ├── train_ts2vec_multi.py
│ ├── ts2vec.py
│ └── utils.py
├── ts_classification_methods/
│ ├── .gitignore
│ ├── README.md
│ ├── data/
│ │ ├── __init__.py
│ │ ├── dataloader.py
│ │ └── preprocessing.py
│ ├── environment.yaml
│ ├── gpt4ts/
│ │ ├── __init__.py
│ │ ├── gpt4ts_utils.py
│ │ ├── main_gpt4ts.py
│ │ ├── main_gpt4ts_ucr.py
│ │ ├── models/
│ │ │ ├── __init__.py
│ │ │ ├── embed.py
│ │ │ ├── gpt4ts.py
│ │ │ └── loss.py
│ │ └── scripts/
│ │ └── generator_gpt4ts.py
│ ├── model/
│ │ ├── __init__.py
│ │ ├── loss.py
│ │ └── tsm_model.py
│ ├── patchtst/
│ │ ├── __init__.py
│ │ ├── main_patchtst_iota.py
│ │ ├── main_patchtst_ucr.py
│ │ ├── mian_patchtst.py
│ │ ├── models/
│ │ │ ├── __init__.py
│ │ │ ├── attention.py
│ │ │ ├── basics.py
│ │ │ ├── heads.py
│ │ │ ├── patchTST.py
│ │ │ ├── pos_encoding.py
│ │ │ └── revin.py
│ │ ├── patch_mask.py
│ │ └── scripts/
│ │ └── generator_patchtst.py
│ ├── result_tsm/
│ │ ├── ChlorineConcentration/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ ├── Crop/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ ├── ECG5000/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ ├── ElectricDevices/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ ├── FordA/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ ├── FordB/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ ├── NonInvasiveFetalECGThorax1/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ ├── NonInvasiveFetalECGThorax2/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ ├── StarLightCurves/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ ├── TwoPatterns/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ ├── UWaveGestureLibraryAll/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ ├── UWaveGestureLibraryX/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ ├── UWaveGestureLibraryY/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ ├── UWaveGestureLibraryZ/
│ │ │ ├── classifier_weights.pt
│ │ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ │ ├── pretrain_weights.pt
│ │ │ └── rnn_reconstruction_pretrain_weights.pt
│ │ └── Wafer/
│ │ ├── classifier_weights.pt
│ │ ├── fcn_reconstruction_pretrain_weights.pt
│ │ ├── pretrain_weights.pt
│ │ └── rnn_reconstruction_pretrain_weights.pt
│ ├── scripts/
│ │ ├── dilated_single_norm.sh
│ │ ├── fcn_lin_set_norm.sh
│ │ ├── fcn_lin_single_norm.sh
│ │ ├── generator_dilated.py
│ │ ├── generator_fcn.py
│ │ ├── generator_pretrain_cls.py
│ │ └── transfer_pretrain_finetune.sh
│ ├── selftime_cls/
│ │ ├── __init__.py
│ │ ├── config/
│ │ │ ├── CricketX_config.json
│ │ │ ├── DodgerLoopDay_config.json
│ │ │ ├── InsectWingbeatSound_config.json
│ │ │ ├── MFPT_config.json
│ │ │ ├── UWaveGestureLibraryAll_config.json
│ │ │ └── XJTU_config.json
│ │ ├── dataloader/
│ │ │ ├── TSC_data_loader.py
│ │ │ ├── __init__.py
│ │ │ └── ucr2018.py
│ │ ├── dataprepare.py
│ │ ├── evaluation/
│ │ │ ├── __init__.py
│ │ │ └── eval_ssl.py
│ │ ├── model/
│ │ │ ├── __init__.py
│ │ │ ├── model_RelationalReasoning.py
│ │ │ └── model_backbone.py
│ │ ├── optim/
│ │ │ ├── __init__.py
│ │ │ ├── pretrain.py
│ │ │ ├── pytorchtools.py
│ │ │ └── train.py
│ │ ├── scripts/
│ │ │ └── ucr.sh
│ │ ├── train_ssl.py
│ │ └── utils/
│ │ ├── __init__.py
│ │ ├── augmentation.py
│ │ ├── datasets.py
│ │ ├── helper.py
│ │ ├── transforms.py
│ │ ├── utils.py
│ │ └── utils_plot.py
│ ├── test/
│ │ ├── __init__.py
│ │ ├── train_uea_test.py
│ │ └── uea_test.py
│ ├── timesnet/
│ │ ├── __init__.py
│ │ ├── main_timesnet.py
│ │ ├── main_timesnet_ucr.py
│ │ ├── models/
│ │ │ ├── Conv_Blocks.py
│ │ │ ├── Embed.py
│ │ │ ├── SelfAttention_Family.py
│ │ │ ├── TimesNet.py
│ │ │ ├── Transformer.py
│ │ │ ├── Transformer_EncDec.py
│ │ │ └── __init__.py
│ │ └── scripts/
│ │ └── generator_timesnet.py
│ ├── tloss_cls/
│ │ ├── default_hyperparameters.json
│ │ ├── losses/
│ │ │ ├── __init__.py
│ │ │ └── triplet_loss.py
│ │ ├── networks/
│ │ │ ├── __init__.py
│ │ │ ├── causal_cnn.py
│ │ │ └── lstm.py
│ │ ├── scikit_wrappers.py
│ │ ├── scripts/
│ │ │ ├── ucr.sh
│ │ │ └── uea.sh
│ │ ├── transfer_ucr.py
│ │ ├── ucr.py
│ │ ├── uea.py
│ │ └── utils.py
│ ├── train.py
│ ├── ts2vec_cls/
│ │ ├── __init__.py
│ │ ├── datautils.py
│ │ ├── models/
│ │ │ ├── __init__.py
│ │ │ ├── dilated_conv.py
│ │ │ ├── encoder.py
│ │ │ └── losses.py
│ │ ├── result/
│ │ │ └── ts2vec_tsm_train_val_b8_single_norm_0409_cls_result.csv
│ │ ├── scripts/
│ │ │ ├── generator_ts2vec.py
│ │ │ ├── generator_ts2vec_uea.py
│ │ │ ├── ts2vec_fcn_set_norm.sh
│ │ │ ├── ts2vec_fcn_single_norm.sh
│ │ │ ├── ts2vec_tsm_set_norm.sh
│ │ │ ├── ts2vec_tsm_single_norm.sh
│ │ │ └── ts2vec_tsm_uea.sh
│ │ ├── tasks/
│ │ │ ├── __init__.py
│ │ │ ├── _eval_protocols.py
│ │ │ └── classification.py
│ │ ├── train.py
│ │ ├── train_fcn.py
│ │ ├── train_tsm.py
│ │ ├── train_tsm_uea.py
│ │ ├── ts2vec.py
│ │ └── utils.py
│ ├── tsm_utils.py
│ ├── tst_cls/
│ │ ├── scripts/
│ │ │ ├── classification.sh
│ │ │ └── pretrain_finetune.sh
│ │ └── src/
│ │ ├── __init__.py
│ │ ├── dataprepare.py
│ │ ├── datasets/
│ │ │ ├── __init__.py
│ │ │ ├── data.py
│ │ │ ├── dataset.py
│ │ │ ├── datasplit.py
│ │ │ └── utils.py
│ │ ├── main.py
│ │ ├── models/
│ │ │ ├── __init__.py
│ │ │ ├── loss.py
│ │ │ └── ts_transformer.py
│ │ ├── optimizers.py
│ │ ├── options.py
│ │ ├── running.py
│ │ └── utils/
│ │ ├── __init__.py
│ │ ├── analysis.py
│ │ └── utils.py
│ ├── tstcc_cls/
│ │ ├── __init__.py
│ │ ├── config_files/
│ │ │ ├── ucr_Configs.py
│ │ │ └── uea_Configs.py
│ │ ├── dataloader/
│ │ │ ├── augmentations.py
│ │ │ └── dataloader.py
│ │ ├── main.py
│ │ ├── main_ucr.py
│ │ ├── main_uea.py
│ │ ├── models/
│ │ │ ├── TC.py
│ │ │ ├── attention.py
│ │ │ ├── loss.py
│ │ │ └── model.py
│ │ ├── result/
│ │ │ └── tstcc_0327_cls_result.csv
│ │ ├── scripts/
│ │ │ ├── fivefold_tstcc_ucr.sh
│ │ │ ├── fivefold_tstcc_uea.sh
│ │ │ ├── generator_ucr.py
│ │ │ ├── generator_uea.py
│ │ │ └── part_uea_tstcc.sh
│ │ ├── trainer/
│ │ │ └── trainer.py
│ │ └── utils.py
│ ├── visualize.py
│ └── visuals/
│ ├── GunPoint/
│ │ ├── classifier_NonInvasiveFetalECGThorax1_linear.pt
│ │ ├── direct_dilated_classifier.pt
│ │ ├── direct_dilated_encoder.pt
│ │ ├── direct_fcn_classifier.pt
│ │ ├── direct_fcn_encoder.pt
│ │ ├── encoder_NonInvasiveFetalECGThorax1_linear.pt
│ │ ├── supervised_classifier_ElectricDevices_linear.pt
│ │ ├── supervised_classifier_UWaveGestureLibraryX_linear.pt
│ │ ├── supervised_encoder_ElectricDevices_linear.pt
│ │ ├── supervised_encoder_UWaveGestureLibraryX_linear.pt
│ │ ├── unsupervised_classifier_UWaveGestureLibraryX_linear.pt
│ │ └── unsupervised_encoder_UWaveGestureLibraryX_linear.pt
│ ├── MixedShapesSmallTrain/
│ │ ├── direct_fcn_linear_encoder_weights.pt
│ │ ├── fcn_linear_encoder_finetune_weights_ElectricDevices.pt
│ │ └── fcn_linear_encoder_finetune_weights_UWaveGestureLibraryZ.pt
│ └── Wine/
│ ├── direct_fcn_encoder.pt
│ ├── direct_fcn_linear_encoder_weights.pt
│ ├── encoder_Crop_linear.pt
│ ├── encoder_NonInvasiveFetalECGThorax1_linear.pt
│ └── encoder_UWaveGestureLibraryZ_linear.pt
└── ts_forecasting_methods/
├── CoST/
│ ├── CODEOWNERS
│ ├── CODE_OF_CONDUCT.md
│ ├── LICENSE.txt
│ ├── README.md
│ ├── SECURITY.md
│ ├── cost.py
│ ├── datasets/
│ │ ├── PLACE_DATASETS_HERE
│ │ ├── electricity.py
│ │ └── m5.py
│ ├── datautils.py
│ ├── models/
│ │ ├── __init__.py
│ │ ├── dilated_conv.py
│ │ └── encoder.py
│ ├── requirements.txt
│ ├── scripts/
│ │ ├── ETT_CoST.sh
│ │ ├── Electricity_CoST.sh
│ │ ├── M5_CoST.sh
│ │ └── Weather_CoST.sh
│ ├── tasks/
│ │ ├── __init__.py
│ │ ├── _eval_protocols.py
│ │ └── forecasting.py
│ ├── train.py
│ └── utils.py
├── Other_baselines/
│ ├── README.md
│ ├── __init__.py
│ ├── data_config.yml
│ ├── data_provider/
│ │ ├── __init__.py
│ │ ├── data_factory.py
│ │ ├── data_factory_tempo.py
│ │ ├── data_loader.py
│ │ ├── data_loader_tempo.py
│ │ ├── m4.py
│ │ └── uea.py
│ ├── exp/
│ │ ├── __init__.py
│ │ ├── exp_basic.py
│ │ ├── exp_basic_patch.py
│ │ ├── exp_long_term_forecasting.py
│ │ ├── exp_main.py
│ │ └── exp_short_term_forecasting.py
│ ├── layers/
│ │ ├── AutoCorrelation.py
│ │ ├── Autoformer_EncDec.py
│ │ ├── Conv_Blocks.py
│ │ ├── Embed.py
│ │ ├── PatchTST_backbone.py
│ │ ├── PatchTST_layers.py
│ │ ├── RevIN.py
│ │ ├── SelfAttention_Family.py
│ │ ├── Transformer_EncDec.py
│ │ └── __init__.py
│ ├── models/
│ │ ├── Autoformer.py
│ │ ├── DLinear.py
│ │ ├── GPT4TS.py
│ │ ├── Informer.py
│ │ ├── LogTrans.py
│ │ ├── PatchTST.py
│ │ ├── PatchTST_raw.py
│ │ ├── TCN.py
│ │ ├── TEMPO.py
│ │ ├── TimesNet.py
│ │ ├── __init__.py
│ │ └── iTransformer.py
│ ├── train_autoformer.py
│ ├── train_cost.py
│ ├── train_dlinear.py
│ ├── train_gpt4ts.py
│ ├── train_informer.py
│ ├── train_itransformer.py
│ ├── train_logtrans.py
│ ├── train_patchtst.py
│ ├── train_tcn.py
│ ├── train_tempo.py
│ ├── train_timesnet.py
│ ├── train_ts2vec.py
│ └── utils/
│ ├── ADFtest.py
│ ├── __init__.py
│ ├── augmentation.py
│ ├── dtw.py
│ ├── dtw_metric.py
│ ├── losses.py
│ ├── m4_summary.py
│ ├── masking.py
│ ├── metrics.py
│ ├── print_args.py
│ ├── rev_in.py
│ ├── timefeatures.py
│ ├── tools.py
│ └── tools_tempo.py
├── README.md
├── SupervisedBaselines/
│ ├── Dockerfile
│ ├── LICENSE
│ ├── Makefile
│ ├── README.md
│ ├── data_provider/
│ │ ├── __init__.py
│ │ ├── data_factory.py
│ │ └── data_loader.py
│ ├── environment.yml
│ ├── exp/
│ │ ├── __init__.py
│ │ ├── exp_basic.py
│ │ ├── exp_informer.py
│ │ └── exp_main.py
│ ├── layers/
│ │ ├── AutoCorrelation.py
│ │ ├── Autoformer_EncDec.py
│ │ ├── Embed.py
│ │ ├── SelfAttention_Family.py
│ │ ├── Transformer_EncDec.py
│ │ └── __init__.py
│ ├── requirements.txt
│ ├── run.py
│ └── utils/
│ ├── __init__.py
│ ├── download_data.py
│ ├── masking.py
│ ├── metrics.py
│ ├── timefeatures.py
│ └── tools.py
└── ts2vec/
├── README.md
├── __init__.py
├── data_provider/
│ ├── __init__.py
│ ├── data_factory.py
│ ├── data_loader.py
│ ├── m4.py
│ ├── metrics.py
│ ├── tools.py
│ └── uea.py
├── datautils.py
├── forecasting_datasets_load_test.py
├── models/
│ ├── __init__.py
│ ├── dilated_conv.py
│ ├── encoder.py
│ └── losses.py
├── requirements.txt
├── scripts/
│ ├── electricity.sh
│ ├── ett.sh
│ ├── kpi.sh
│ ├── ucr.sh
│ ├── uea.sh
│ └── yahoo.sh
├── tasks/
│ ├── __init__.py
│ ├── _eval_protocols.py
│ ├── anomaly_detection.py
│ ├── classification.py
│ └── forecasting.py
├── train.py
├── ts2vec.py
└── utils.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .idea/.gitignore
================================================
# Default ignored files
/shelf/
/workspace.xml
# Editor-based HTTP Client requests
/httpRequests/
# Datasource local storage ignored files
/dataSources/
/dataSources.local.xml
================================================
FILE: .idea/deployment.xml
================================================
================================================
FILE: .idea/inspectionProfiles/Project_Default.xml
================================================
================================================
FILE: .idea/inspectionProfiles/profiles_settings.xml
================================================
================================================
FILE: .idea/modules.xml
================================================
================================================
FILE: .idea/time-series-ptms.iml
================================================
================================================
FILE: .idea/vcs.xml
================================================
================================================
FILE: README.md
================================================
# [A Survey on Time-Series Pre-Trained Models](https://arxiv.org/pdf/2305.10716v2)
This is the training code for our paper *"[A Survey on Time-Series Pre-Trained Models](https://arxiv.org/pdf/2305.10716v2)"*, which has been accepted for publication in the IEEE Transactions on Knowledge and Data Engineering (TKDE-24).
## Overview
Time-Series Mining (TSM) is an important research area with great potential in practical applications. Deep learning models that rely on massive labeled data have been applied to TSM with success. However, constructing large-scale, well-labeled datasets is difficult due to the cost of data annotation.
Recently, pre-trained models have gradually attracted attention in the time series domain owing to their remarkable performance in computer vision and natural language processing. In this survey, we provide a comprehensive review of Time-Series Pre-Trained Models (TS-PTMs), aiming to guide the understanding, application, and study of TS-PTMs.
Specifically, we first briefly introduce the typical deep learning models employed in TSM. Then, we give an overview of TS-PTMs according to their pre-training techniques. The main categories we explore include supervised, unsupervised, and self-supervised TS-PTMs.
Further, extensive experiments involving 27 methods, 434 datasets, and 679 transfer learning scenarios are conducted to analyze the advantages and disadvantages of transfer learning strategies, Transformer-based models, and representative TS-PTMs. Finally, we point out some potential directions for future work on TS-PTMs.
## Datasets
The datasets used in this project are as follows:
### Time-Series Classification
* [128 UCR datasets](https://www.cs.ucr.edu/~eamonn/time_series_data_2018/UCRArchive_2018.zip)
* [30 UEA datasets](http://www.timeseriesclassification.com/Downloads/Archives/Multivariate2018_arff.zip)
* [SleepEEG dataset](https://www.physionet.org/content/sleep-edfx/1.0.0/)
* [Epilepsy dataset](https://repositori.upf.edu/handle/10230/42894)
* [FD-A and FD-B datasets](https://mb.uni-paderborn.de/en/kat/main-research/datacenter/bearing-datacenter/data-sets-and-download)
* [HAR dataset](https://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones)
* [Gesture dataset](http://www.timeseriesclassification.com/description.php?Dataset=UWaveGestureLibrary)
* [ECG dataset](https://physionet.org/content/challenge-2017/1.0.0/)
* [EMG dataset](https://physionet.org/content/emgdb/1.0.0/)
### Time-Series Forecasting
* [ETDataset (including 4 datasets)](https://github.com/zhouhaoyi/ETDataset)
* [Electricity](https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014)
* [Traffic](http://pems.dot.ca.gov)
* [Weather](https://www.bgc-jena.mpg.de/wetter)
* [Exchange](https://github.com/laiguokun/multivariate-time-series-data)
* [ILI](https://gis.cdc.gov/grasp/fluview/fluportaldashboard.html)
### Time-Series Anomaly Detection
* [Yahoo dataset](https://webscope.sandbox.yahoo.com/catalog.php?datatype=s&did=70)
* [KPI dataset](http://test-10056879.file.myqcloud.com/10056879/test/20180524_78431960010324/KPI%E5%BC%82%E5%B8%B8%E6%A3%80%E6%B5%8B%E5%86%B3%E8%B5%9B%E6%95%B0%E6%8D%AE%E9%9B%86.zip)
* [250 UCR anomaly detection datasets](https://wu.renjie.im/research/anomaly-benchmarks-are-flawed/#ucr-time-series-anomaly-archiv)
* [MSL dataset](https://github.com/khundman/telemanom)
* [SMAP dataset](https://github.com/eBay/RANSynCoders)
* [PSM dataset](https://github.com/khundman/telemanom)
* [SMD dataset](https://github.com/NetManAIOps/OmniAnomaly)
* [SWaT dataset](https://itrust.sutd.edu.sg/itrust-labs_datasets/dataset_info/#swat)
* [NIPS-TS-SWAN dataset](https://github.com/datamllab/tods/tree/benchmark/benchmark)
* [NIPS-TS-GECCO dataset](https://github.com/datamllab/tods/tree/benchmark/benchmark)
## Pre-Trained Models on Time Series Classification
- [x] [FCN](https://github.com/cauchyturing/UCR_Time_Series_Classification_Deep_Learning_Baseline)
- [x] [FCN Encoder+CNN Decoder](https://github.com/qianlima-lab/time-series-ptms/blob/master/ts_classification_methods/model/tsm_model.py)
- [x] [FCN Encoder+RNN Decoder](https://github.com/qianlima-lab/time-series-ptms/blob/master/ts_classification_methods/model/tsm_model.py)
- [x] [TCN](https://github.com/White-Link/UnsupervisedScalableRepresentationLearningTimeSeries)
- [x] [Transformer](https://github.com/gzerveas/mvts_transformer)
- [x] [TST](https://github.com/gzerveas/mvts_transformer)
- [x] [T-Loss](https://github.com/White-Link/UnsupervisedScalableRepresentationLearningTimeSeries)
- [x] [SelfTime](https://github.com/haoyfan/SelfTime)
- [x] [TS-TCC](https://github.com/emadeldeen24/TS-TCC)
- [x] [TS2Vec](https://github.com/zhihanyue/ts2vec)
- [x] [TimesNet](https://github.com/thuml/TimesNet)
- [x] [PatchTST](https://github.com/yuqinie98/PatchTST)
- [x] [GPT4TS](https://github.com/DAMO-DI-ML/NeurIPS2023-One-Fits-All)
For details, please refer to [ts_classification_methods/README](https://github.com/qianlima-lab/time-series-ptms/blob/master/ts_classification_methods/README.md).
## Pre-Trained Models on Time Series Forecasting
- [x] [LogTrans](https://github.com/AIStream-Peelout/flow-forecast)
- [x] [TCN](https://github.com/locuslab/TCN)
- [x] [Informer](https://github.com/zhouhaoyi/Informer2020)
- [x] [Autoformer](https://github.com/thuml/autoformer)
- [x] [TS2Vec](https://github.com/zhihanyue/ts2vec)
- [x] [CoST](https://github.com/salesforce/CoST)
- [x] [TimesNet](https://github.com/thuml/TimesNet)
- [x] [PatchTST](https://github.com/yuqinie98/PatchTST)
- [x] [DLinear](https://github.com/vivva/DLinear)
- [x] [GPT4TS](https://github.com/DAMO-DI-ML/NeurIPS2023-One-Fits-All)
- [x] [TEMPO](https://github.com/DC-research/TEMPO)
- [x] [iTransformer](https://github.com/thuml/iTransformer)
For details, please refer to [ts_forecasting_methods/README](https://github.com/qianlima-lab/time-series-ptms/blob/master/ts_forecasting_methods/README.md).
## Pre-Trained Models on Time Series Anomaly Detection
- [x] [SPOT](https://github.com/limjcst/ads-evt)
- [x] [DSPOT](https://github.com/limjcst/ads-evt)
- [x] [LSTM-VAE](https://github.com/SchindlerLiang/VAE-for-Anomaly-Detection)
- [x] [DONUT](https://github.com/NetManAIOps/donut)
- [x] [Spectral Residual (SR)](https://dl.acm.org/doi/10.1145/3292500.3330680)
- [x] [Anomaly Transformer (AT)](https://github.com/spencerbraun/anomaly_transformer_pytorch)
- [x] [TS2Vec](https://github.com/zhihanyue/ts2vec)
- [x] [TimesNet](https://github.com/thuml/TimesNet)
- [x] [GPT4TS](https://github.com/DAMO-DI-ML/NeurIPS2023-One-Fits-All)
- [x] [DCdetector](https://github.com/DAMO-DI-ML/KDD2023-DCdetector)
For details, please refer to [ts_anomaly_detection_methods/README](https://github.com/qianlima-lab/time-series-ptms/blob/master/ts_anomaly_detection_methods/README.md).
## Acknowledgments
We thank the anonymous reviewers for their helpful feedback. We thank Professor **Eamonn Keogh** from UCR and all the people who have contributed to the UCR & UEA time series archives and other time series datasets. The authors would also like to thank
Professor **Garrison W. Cottrell** from UCSD, and **Chuxin Chen**, **Xidi Cai**, **Yu Chen**, and **Peitian Ma** from SCUT for their helpful suggestions.
## Citation
If you use this code for your research, please cite our paper:
```
@article{ma2024survey,
title={A survey on time-series pre-trained models},
author={Ma, Qianli and Liu, Zhen and Zheng, Zhenjing and Huang, Ziyang and Zhu, Siying and Yu, Zhongzhong and Kwok, James T},
journal={IEEE Transactions on Knowledge and Data Engineering},
year={2024}
}
```
================================================
FILE: ts_anomaly_detection_methods/README.md
================================================
This is the time-series anomaly detection training code for our paper *"A Survey on Time-Series Pre-Trained Models"*.
## Baselines
| ID | Method | Year | Press | Source Code |
| :--: | :----------------------------------------------------------: | :--: | :-------: | :----------------------------------------------------------: |
| 1 | [SPOT](https://dl.acm.org/doi/abs/10.1145/3097983.3098144) | 2017 | KDD | [github_link](https://github.com/Amossys-team/SPOT) |
| 2 | [DSPOT](https://dl.acm.org/doi/abs/10.1145/3097983.3098144) | 2017 | KDD | [github_link](https://github.com/Amossys-team/SPOT) |
| 3 | [LSTM-VAE](https://ieeexplore.ieee.org/abstract/document/8279425) | 2018 | IEEE RA.L | [github_link](https://github.com/SchindlerLiang/VAE-for-Anomaly-Detection) |
| 4 | [DONUT](https://dl.acm.org/doi/abs/10.1145/3178876.3185996) | 2018 | WWW | [github_link](https://github.com/NetManAIOps/donut) |
| 5 | [Spectral Residual (SR)*](https://dl.acm.org/doi/abs/10.1145/3292500.3330680) | 2019 | KDD | - |
| 6 | [Anomaly Transformer (AT)](https://arxiv.org/abs/2110.02642) | 2022 | ICLR | [github_link](https://github.com/spencerbraun/anomaly_transformer_pytorch) |
| 7 | [TS2Vec](https://www.aaai.org/AAAI22Papers/AAAI-8809.YueZ.pdf) | 2022 | AAAI | [github_link](https://github.com/yuezhihan/ts2vec) |
| 8 | [TimesNet](https://openreview.net/pdf?id=ju_Uqw384Oq) | 2023 | ICLR | [github_link](https://github.com/thuml/TimesNet) |
| 9 | [GPT4TS](https://arxiv.org/abs/2302.11939) | 2023 | NeurIPS | [github_link](https://github.com/DAMO-DI-ML/NeurIPS2023-One-Fits-All) |
| 10 | [DCdetector](https://arxiv.org/abs/2306.10347) | 2023 | KDD | [github_link](https://github.com/DAMO-DI-ML/KDD2023-DCdetector) |
For details, please refer to [ts_anomaly_detection_methods/other_anomaly_baselines/README](https://github.com/qianlima-lab/time-series-ptms/blob/master/ts_anomaly_detection_methods/other_anomaly_baselines/README.md).
================================================
FILE: ts_anomaly_detection_methods/anomaly_transformer/ATmodelbatch.py
================================================
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import pdb
import numpy as np
from utils import data_slice,split_N_pad
import time
from torch.utils.data import DataLoader, TensorDataset, SequentialSampler
if torch.cuda.is_available():
torch.set_default_tensor_type('torch.cuda.DoubleTensor')
else:
torch.set_default_tensor_type('torch.DoubleTensor')
class AnomalyAttention(nn.Module):
def __init__(self, N, d_model):
super(AnomalyAttention, self).__init__()
self.d_model = d_model
self.N = N
self.Wq = nn.Linear(d_model, d_model, bias=False)
self.Wk = nn.Linear(d_model, d_model, bias=False)
self.Wv = nn.Linear(d_model, d_model, bias=False)
self.Ws = nn.Linear(d_model, 1, bias=False)
self.Q = self.K = self.V = self.sigma = torch.zeros((N, d_model))
self.P = torch.zeros((N, N))
self.S = torch.zeros((N, N))
def forward(self, x):
#x :[batch,N,d_model]
self.initialize(x)
self.S = self.series_association()
self.P = self.prior_association()
Z = self.reconstruction()
return Z
def initialize(self, x):
self.Q = self.Wq(x)
self.K = self.Wk(x)
self.V = self.Wv(x)
self.sigma = self.Ws(x)
@staticmethod
def gaussian_kernel(mean, sigma):
normalize = 1 / (math.sqrt(2 * torch.pi) * torch.abs(sigma))
return normalize * torch.exp(-0.5 * (mean / sigma).pow(2))
def prior_association(self):
# qwe = torch.from_numpy(
# np.abs(np.indices((self.N, self.N))[0] - np.indices((self.N, self.N))[1])
# ).cuda
qwe = torch.from_numpy(
np.abs(np.indices((self.N, self.N))[0] - np.indices((self.N, self.N))[1])
)
if torch.cuda.is_available():
qwe = qwe.cuda()
# gaussian: [batch,N,N]
# since the kernel is Gaussian over |i-j|, summing over rows or columns gives the same normalizer
gaussian = self.gaussian_kernel(qwe.double(), self.sigma)
gaussian /= gaussian.sum(dim=-1).view(-1,self.N,1)
return gaussian
def series_association(self):
# original version: [N,N]
# return F.softmax(self.Q @ self.K.T / math.sqrt(self.d_model), dim=0)
# now [batch,N,N]; the old dim=0 version is a column-wise softmax, which seems wrong; judging from the downstream reconstruction it should be a row-wise softmax, as done here
return F.softmax(torch.matmul(self.Q,self.K.transpose(1,2)) / math.sqrt(self.d_model), dim=2)
def reconstruction(self):
return torch.matmul(self.S,self.V)
class AnomalyTransformerBlock(nn.Module):
def __init__(self, N, d_model):
super().__init__()
self.N, self.d_model = N, d_model
self.attention = AnomalyAttention(self.N, self.d_model)
self.ln1 = nn.LayerNorm(self.d_model)
self.ff = nn.Sequential(nn.Linear(self.d_model, self.d_model), nn.ReLU())
self.ln2 = nn.LayerNorm(self.d_model)
def forward(self, x):
# x: [batch,N,d_model]
x_identity = x
x = self.attention(x)
z = self.ln1(x + x_identity)
z_identity = z
z = self.ff(z)
z = self.ln2(z + z_identity)
# z: [batch,N,d_model]
return z
class AnomalyTransformer(nn.Module):
def __init__(self,batch_size, N, in_channel, d_model, layers, lambda_):
super().__init__()
self.batch_size = batch_size
self.in_channel = in_channel
self.N = N
self.d_model = d_model
self.input2hidden = nn.Linear(self.in_channel,self.d_model)
self.hidden2output = nn.Linear(self.d_model,self.in_channel)
self.blocks = nn.ModuleList(
[AnomalyTransformerBlock(self.N, self.d_model) for _ in range(layers)]
)
self.output = None
self.lambda_ = lambda_
self.P_layers = []
self.S_layers = []
def to_string(self):
return 'in_channel:%d_N:%d_dmodel:%d_' % (self.in_channel,self.N,self.d_model)
def forward(self, x):
# x: [batch,N,in_channel]
self.P_layers = []
self.S_layers = []
x = self.input2hidden(x)
for idx, block in enumerate(self.blocks):
x = block(x)
# x: [batch,N,d_model]
self.P_layers.append(block.attention.P)
self.S_layers.append(block.attention.S)
self.output = self.hidden2output(x)
# output: [batch,N,in_channel]
return self.output
# def layer_association_discrepancy(self, Pl, Sl, x):
# rowwise_kl = lambda row: (
# F.kl_div(Pl[row, :], Sl[row, :]) + F.kl_div(Sl[row, :], Pl[row, :])
# )
# ad_vector = torch.concat(
# [rowwise_kl(row).unsqueeze(0) for row in range(Pl.shape[0])]
# )
# return ad_vector
# ad_vector: [N]
# def rowwise_kl (self,Pl,Sl,idx,row):
# return F.kl_div(Pl[idx,row, :], Sl[idx,row, :]) + F.kl_div(Sl[idx,row, :], Pl[idx,row, :])
# def layer_association_discrepancy(self, Pl, Sl, x):
# wholetmp=[]
# for idx in range(Pl.shape[0]):
# rowtmp=[]
# for row in range(Pl.shape[1]):
# rowtmp.append(self.rowwise_kl(Pl,Sl,idx,row).unsqueeze(0))
# wholetmp.append(torch.cat(rowtmp))
# ad_vector = torch.cat(
# wholetmp
# ).reshape([-1,Pl.shape[1]])
# #ad_vector: [batch,N]
# return ad_vector
def rowwise_kl(self, row, Pl, Sl, eps=1e-4):
Pl_r = Pl[:,row,:]
Sl_r = Sl[:,row,:]
Pl_r = (Pl_r+ eps) / torch.sum(Pl_r + eps, dim=-1, keepdims=True)
Sl_r = (Sl_r + eps) / torch.sum(Sl_r+ eps, dim=-1, keepdims=True)
'''TODO: revise this function'''
ret = torch.sum(
F.kl_div( torch.log(Pl_r), Sl_r, reduction='none') + F.kl_div( torch.log(Sl_r), Pl_r, reduction='none'), dim=1
)
return ret
def layer_association_discrepancy(self, Pl, Sl, x):
ad_vector = torch.concat(
[self.rowwise_kl(row, Pl, Sl).unsqueeze(1) for row in range(Pl.shape[1])], dim=1
)
return ad_vector
def association_discrepancy(self, P_list, S_list, x):
ret = (1 / len(P_list)) * sum(
[
self.layer_association_discrepancy(P, S, x)
for P, S in zip(P_list, S_list)
]
)
# ret: [batch,N]
return ret
def loss_function(self, x_hat, P_list, S_list, lambda_, x):
#P_list: [layers,batch,N,N]
#S_list: [layers,batch,N,N]
frob_norm = torch.linalg.matrix_norm(x_hat - x, ord="fro")
ret = frob_norm - (
lambda_
* torch.linalg.norm(self.association_discrepancy(P_list, S_list, x),dim=1, ord=1)
)
return ret.mean()
def min_loss(self, x):
P_list = self.P_layers
S_list = [S.detach() for S in self.S_layers]
# S_list = self.S_layers
lambda_ = -self.lambda_
return self.loss_function(self.output, P_list, S_list, lambda_, x)
def max_loss(self, x):
P_list = [P.detach() for P in self.P_layers]
# P_list = self.P_layers
S_list = self.S_layers
lambda_ = self.lambda_
return self.loss_function(self.output, P_list, S_list, lambda_, x)
def anomaly_score_whole(self, x):
# x:[length,dim]
x = np.array(split_N_pad(x.reshape([-1,1]),self.N))
'''TODO: test data_slice'''
data = torch.from_numpy(x)
if torch.cuda.is_available():
data = data.cuda()
dataset = TensorDataset(data)
dataloader = DataLoader(dataset, batch_size=min(self.batch_size, len(dataset)), shuffle=False, drop_last=False)
scores=[]
for step, batch in enumerate(dataloader):
batch=batch[0]
score = self.anomaly_score(batch)
scores.append(score)
return torch.cat(scores).flatten()
def anomaly_score(self, x):
# originally x: [N,in_channel]
output = self.forward(x)
tmp = -self.association_discrepancy(self.P_layers, self.S_layers, x)
ad = F.softmax(
tmp, dim=0
)
assert ad.shape[1] == self.N
# norm = torch.tensor(
# [
# torch.linalg.norm(x[i, :] - self.output[i, :], ord=2)
# for i in range(self.N)
# ]
# )
norm = []
for idx in range(x.shape[0]):
tmp = torch.tensor(
[
torch.linalg.norm(x[idx,i, :] - self.output[idx,i, :], ord=2)
for i in range(self.N)
]
)
norm.append(tmp)
norm = torch.cat(norm).reshape([-1,self.N])
assert norm.shape[1] == self.N
score = torch.mul(ad, norm)
return score
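A minimal smoke test for the batched `AnomalyTransformer` above (a hedged sketch, not part of the repository; the hyperparameter values are illustrative). It assumes the `anomaly_transformer` directory is the working directory so that `utils.py` resolves; note the module sets the default tensor type to double precision (and to CUDA when available) at import time:

```python
# Sketch: forward() expects x of shape [batch, N, in_channel] and returns the
# reconstruction; min_loss/max_loss use the P/S associations cached by forward().
import torch
from ATmodelbatch import AnomalyTransformer

model = AnomalyTransformer(batch_size=4, N=100, in_channel=1,
                           d_model=64, layers=2, lambda_=3)
x = torch.randn(4, 100, 1)   # double precision by default after import
x_hat = model(x)             # reconstruction: [4, 100, 1]
print(x_hat.shape, model.min_loss(x).item(), model.max_loss(x).item())
```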
================================================
FILE: ts_anomaly_detection_methods/anomaly_transformer/datautils.py
================================================
import os
import numpy as np
import pandas as pd
import math
import random
from datetime import datetime
import pickle
from utils import pkl_load, pad_nan_to_target,data_slice,pad_zero_to_target
from scipy.io.arff import loadarff
from sklearn.preprocessing import StandardScaler, MinMaxScaler
def load_UCR(dataset):
train_file = os.path.join('datasets/UCR', dataset, dataset + "_TRAIN.tsv")
test_file = os.path.join('datasets/UCR', dataset, dataset + "_TEST.tsv")
train_df = pd.read_csv(train_file, sep='\t', header=None)
test_df = pd.read_csv(test_file, sep='\t', header=None)
train_array = np.array(train_df)
test_array = np.array(test_df)
# Move the labels to {0, ..., L-1}
labels = np.unique(train_array[:, 0])
transform = {}
for i, l in enumerate(labels):
transform[l] = i
train = train_array[:, 1:].astype(np.float64)
train_labels = np.vectorize(transform.get)(train_array[:, 0])
test = test_array[:, 1:].astype(np.float64)
test_labels = np.vectorize(transform.get)(test_array[:, 0])
# Normalization for non-normalized datasets
# To keep the amplitude information, we do not normalize values over
# individual time series, but on the whole dataset
if dataset not in [
'AllGestureWiimoteX',
'AllGestureWiimoteY',
'AllGestureWiimoteZ',
'BME',
'Chinatown',
'Crop',
'EOGHorizontalSignal',
'EOGVerticalSignal',
'Fungi',
'GestureMidAirD1',
'GestureMidAirD2',
'GestureMidAirD3',
'GesturePebbleZ1',
'GesturePebbleZ2',
'GunPointAgeSpan',
'GunPointMaleVersusFemale',
'GunPointOldVersusYoung',
'HouseTwenty',
'InsectEPGRegularTrain',
'InsectEPGSmallTrain',
'MelbournePedestrian',
'PickupGestureWiimoteZ',
'PigAirwayPressure',
'PigArtPressure',
'PigCVP',
'PLAID',
'PowerCons',
'Rock',
'SemgHandGenderCh2',
'SemgHandMovementCh2',
'SemgHandSubjectCh2',
'ShakeGestureWiimoteZ',
'SmoothSubspace',
'UMD'
]:
return train[..., np.newaxis], train_labels, test[..., np.newaxis], test_labels
mean = np.nanmean(train)
std = np.nanstd(train)
train = (train - mean) / std
test = (test - mean) / std
return train[..., np.newaxis], train_labels, test[..., np.newaxis], test_labels
def load_anomaly(name):
res = pkl_load(f'datasets/{name}.pkl')
return res['all_train_data'], res['all_train_labels'], res['all_train_timestamps'], \
res['all_test_data'], res['all_test_labels'], res['all_test_timestamps'], \
res['delay']
def gen_ano_train_data(all_train_data):
maxl = np.max([ len(all_train_data[k]) for k in all_train_data ])
pretrain_data = []
for k in all_train_data:
train_data = pad_zero_to_target(all_train_data[k], maxl, axis=0)
pretrain_data.append(train_data)
pretrain_data = np.expand_dims(np.stack(pretrain_data), 2)
return pretrain_data
if __name__ == '__main__':
dataset='yahoo'
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = load_anomaly(dataset)
train_data = gen_ano_train_data(all_train_data)
train_data_s = data_slice(train_data, 100)
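An illustrative call to `load_UCR` (a minimal sketch, assuming the UCR archive has been extracted to `datasets/UCR/` relative to the working directory):

```python
# load_UCR returns train/test arrays with a trailing channel axis and
# integer labels remapped to {0, ..., L-1}.
from datautils import load_UCR

train, train_labels, test, test_labels = load_UCR('FordA')
print(train.shape)   # (n_train, series_length, 1)
print(test.shape)    # (n_test, series_length, 1)
```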
================================================
FILE: ts_anomaly_detection_methods/anomaly_transformer/models/__init__.py
================================================
from .encoder import TSEncoder
================================================
FILE: ts_anomaly_detection_methods/anomaly_transformer/models/anomaly_transformer_model.py
================================================
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
class AnomalyAttention(nn.Module):
def __init__(self, N, d_model):
super(AnomalyAttention, self).__init__()
self.d_model = d_model
self.N = N
self.Wq = nn.Linear(d_model, d_model, bias=False)
self.Wk = nn.Linear(d_model, d_model, bias=False)
self.Wv = nn.Linear(d_model, d_model, bias=False)
self.Ws = nn.Linear(d_model, 1, bias=False)
self.Q = torch.zeros((N, d_model))
self.K = torch.zeros((N, d_model))
self.V = torch.zeros((N, d_model))
self.sigma = torch.zeros((N, 1))
self.P = torch.zeros((N, N))
self.S = torch.zeros((N, N))
def forward(self, x):
# x: N x d_model
self.initialize(x)
self.P = self.prior_association()
self.S = self.series_association()
Z = self.reconstruction() # N x d_model
return Z
def initialize(self, x):
self.Q = self.Wq(x)
self.K = self.Wk(x)
self.V = self.Wv(x)
self.sigma = self.Ws(x)
@staticmethod
def gaussian_kernel(mean, sigma):
normalize = 1 / (math.sqrt(2 * torch.pi) * sigma)
return normalize * torch.exp(-0.5 * (mean / sigma).pow(2))
def prior_association(self):
p = torch.from_numpy(
np.abs(np.indices((self.N, self.N))[0] - np.indices((self.N, self.N))[1])
)
gaussian = self.gaussian_kernel(p.float(), self.sigma)
gaussian /= gaussian.sum(dim=-1).view(-1, 1)
return gaussian
def series_association(self):
return F.softmax((self.Q @ self.K.T) / math.sqrt(self.d_model), dim=0)
def reconstruction(self):
return self.S @ self.V
class AnomalyTransformerBlock(nn.Module):
def __init__(self, N, d_model):
super().__init__()
self.N, self.d_model = N, d_model
self.attention = AnomalyAttention(self.N, self.d_model)
self.ln1 = nn.LayerNorm(self.d_model)
self.ff = nn.Sequential(nn.Linear(self.d_model, self.d_model), nn.ReLU())
self.ln2 = nn.LayerNorm(self.d_model)
def forward(self, x):
x_identity = x
x = self.attention(x)
z = self.ln1(x + x_identity)
z_identity = z
z = self.ff(z)
z = self.ln2(z + z_identity)
return z
class AnomalyTransformer(nn.Module):
def __init__(self, N, in_channel, d_model, layers, lambda_):
super().__init__()
self.in_channel = in_channel
self.N = N
self.d_model = d_model
self.input2hidden = nn.Linear(self.in_channel, self.d_model)
self.blocks = nn.ModuleList(
[AnomalyTransformerBlock(self.N, self.d_model) for _ in range(layers)]
)
self.output = None
self.lambda_ = lambda_
self.P_layers = []
self.S_layers = []
def forward(self, x):
x = self.input2hidden(x)
for idx, block in enumerate(self.blocks):
x = block(x)
self.P_layers.append(block.attention.P)
self.S_layers.append(block.attention.S)
self.output = x # N x d_model
return x
def layer_association_discrepancy(self, Pl, Sl, x):
rowwise_kl = lambda row: (
F.kl_div(Pl[row, :], Sl[row, :]) + F.kl_div(Sl[row, :], Pl[row, :])
)
ad_vector = torch.concat(
[rowwise_kl(row).unsqueeze(0) for row in range(Pl.shape[0])]
)
return ad_vector
def association_discrepancy(self, P_list, S_list, x):
return (1 / len(P_list)) * sum(
[
self.layer_association_discrepancy(P, S, x)
for P, S in zip(P_list, S_list)
]
)
def loss_function(self, x_hat, P_list, S_list, lambda_, x):
frob_norm = torch.linalg.matrix_norm(x_hat - x, ord="fro")
return frob_norm - (
lambda_
* torch.linalg.norm(self.association_discrepancy(P_list, S_list, x), ord=1)
)
def min_loss(self, x):
P_list = self.P_layers
S_list = [S.detach() for S in self.S_layers]
lambda_ = -self.lambda_
return self.loss_function(self.output, P_list, S_list, lambda_, x)
def max_loss(self, x):
P_list = [P.detach() for P in self.P_layers]
S_list = self.S_layers
lambda_ = self.lambda_
return self.loss_function(self.output, P_list, S_list, lambda_, x)
def anomaly_score(self, x):
ad = F.softmax(
-self.association_discrepancy(self.P_layers, self.S_layers, x), dim=0
)
assert ad.shape[0] == self.N
norm = torch.tensor(
[
torch.linalg.norm(x[i, :] - self.output[i, :], ord=2)
for i in range(self.N)
]
)
assert norm.shape[0] == self.N
score = torch.mul(ad, norm)
return score
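For reference, the quantities computed by `association_discrepancy` and `loss_function` above can be written compactly (a reconstruction from this code, with $L$ attention layers, prior associations $\mathcal{P}^{l}$, series associations $\mathcal{S}^{l}$, and row-wise KL divergences):

```latex
\mathrm{AssDis}(\mathcal{P}, \mathcal{S}; X)
  = \frac{1}{L} \sum_{l=1}^{L}
    \Big[ \mathrm{KL}\big(\mathcal{P}^{l} \,\|\, \mathcal{S}^{l}\big)
        + \mathrm{KL}\big(\mathcal{S}^{l} \,\|\, \mathcal{P}^{l}\big) \Big],
\qquad
\mathcal{L}(\hat{X}; \mathcal{P}, \mathcal{S}, \lambda; X)
  = \big\| \hat{X} - X \big\|_{F}
    - \lambda \, \big\| \mathrm{AssDis}(\mathcal{P}, \mathcal{S}; X) \big\|_{1}
```

`min_loss` and `max_loss` realize the minimax training strategy by flipping the sign of $\lambda$ and detaching $\mathcal{S}$ or $\mathcal{P}$, respectively.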
================================================
FILE: ts_anomaly_detection_methods/anomaly_transformer/models/dilated_conv.py
================================================
import torch
from torch import nn
import torch.nn.functional as F
import numpy as np
class SamePadConv(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, dilation=1, groups=1):
super().__init__()
self.receptive_field = (kernel_size - 1) * dilation + 1
padding = self.receptive_field // 2
self.conv = nn.Conv1d(
in_channels, out_channels, kernel_size,
padding=padding,
dilation=dilation,
groups=groups
)
self.remove = 1 if self.receptive_field % 2 == 0 else 0
def forward(self, x):
out = self.conv(x)
if self.remove > 0:
out = out[:, :, : -self.remove]
return out
class ConvBlock(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, dilation, final=False):
super().__init__()
self.conv1 = SamePadConv(in_channels, out_channels, kernel_size, dilation=dilation)
self.conv2 = SamePadConv(out_channels, out_channels, kernel_size, dilation=dilation)
self.projector = nn.Conv1d(in_channels, out_channels, 1) if in_channels != out_channels or final else None
def forward(self, x):
residual = x if self.projector is None else self.projector(x)
x = F.gelu(x)
x = self.conv1(x)
x = F.gelu(x)
x = self.conv2(x)
return x + residual
class DilatedConvEncoder(nn.Module):
def __init__(self, in_channels, channels, kernel_size):
super().__init__()
self.net = nn.Sequential(*[
ConvBlock(
channels[i-1] if i > 0 else in_channels,
channels[i],
kernel_size=kernel_size,
dilation=2**i,
final=(i == len(channels)-1)
)
for i in range(len(channels))
])
def forward(self, x):
return self.net(x)
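A quick shape check for the blocks above (a minimal sketch; the sizes are illustrative). The symmetric padding in `SamePadConv`, plus the optional right-trim in `forward`, keeps the temporal length unchanged for any dilation:

```python
import torch
from models.dilated_conv import SamePadConv, DilatedConvEncoder

x = torch.randn(2, 8, 50)                         # B x C x T
conv = SamePadConv(8, 16, kernel_size=3, dilation=4)
print(conv(x).shape)                              # torch.Size([2, 16, 50])

# Stacked blocks with dilations 1, 2, 4 also preserve the length T.
enc = DilatedConvEncoder(8, [16, 16, 32], kernel_size=3)
print(enc(x).shape)                               # torch.Size([2, 32, 50])
```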
================================================
FILE: ts_anomaly_detection_methods/anomaly_transformer/models/encoder.py
================================================
import torch
from torch import nn
import torch.nn.functional as F
import numpy as np
from .dilated_conv import DilatedConvEncoder
def generate_continuous_mask(B, T, n=5, l=0.1):
res = torch.full((B, T), True, dtype=torch.bool)
if isinstance(n, float):
n = int(n * T)
n = max(min(n, T // 2), 1)
if isinstance(l, float):
l = int(l * T)
l = max(l, 1)
for i in range(B):
for _ in range(n):
t = np.random.randint(T-l+1)
res[i, t:t+l] = False
return res
def generate_binomial_mask(B, T, p=0.5):
return torch.from_numpy(np.random.binomial(1, p, size=(B, T))).to(torch.bool)
class TSEncoder(nn.Module):
def __init__(self, input_dims, output_dims, hidden_dims=64, depth=10, mask_mode='binomial'):
super().__init__()
self.input_dims = input_dims
self.output_dims = output_dims
self.hidden_dims = hidden_dims
self.mask_mode = mask_mode
self.input_fc = nn.Linear(input_dims, hidden_dims)
self.feature_extractor = DilatedConvEncoder(
hidden_dims,
[hidden_dims] * depth + [output_dims],
kernel_size=3
)
self.repr_dropout = nn.Dropout(p=0.1)
def forward(self, x, mask=None): # x: B x T x input_dims
nan_mask = ~x.isnan().any(axis=-1)
x[~nan_mask] = 0
x = self.input_fc(x) # B x T x Ch
# generate & apply mask
if mask is None:
if self.training:
mask = self.mask_mode
else:
mask = 'all_true'
if mask == 'binomial':
mask = generate_binomial_mask(x.size(0), x.size(1)).to(x.device)
elif mask == 'continuous':
mask = generate_continuous_mask(x.size(0), x.size(1)).to(x.device)
elif mask == 'all_true':
mask = x.new_full((x.size(0), x.size(1)), True, dtype=torch.bool)
elif mask == 'all_false':
mask = x.new_full((x.size(0), x.size(1)), False, dtype=torch.bool)
elif mask == 'mask_last':
mask = x.new_full((x.size(0), x.size(1)), True, dtype=torch.bool)
mask[:, -1] = False
mask &= nan_mask
x[~mask] = 0
# conv encoder
x = x.transpose(1, 2) # B x Ch x T
x = self.repr_dropout(self.feature_extractor(x)) # B x Co x T
x = x.transpose(1, 2) # B x T x Co
return x
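A minimal usage sketch for `TSEncoder` (illustrative dimensions). Outside training the mask defaults to `'all_true'`, so every timestep is encoded:

```python
import torch
from models.encoder import TSEncoder

enc = TSEncoder(input_dims=1, output_dims=320, hidden_dims=64, depth=10)
enc.eval()                    # mask defaults to 'all_true' when not training
x = torch.randn(4, 200, 1)    # B x T x input_dims
with torch.no_grad():
    z = enc(x)                # B x T x output_dims
print(z.shape)                # torch.Size([4, 200, 320])
```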
================================================
FILE: ts_anomaly_detection_methods/anomaly_transformer/models/losses.py
================================================
import torch
from torch import nn
import torch.nn.functional as F
def hierarchical_contrastive_loss(z1, z2, alpha=0.5, temporal_unit=0):
loss = torch.tensor(0., device=z1.device)
d = 0
while z1.size(1) > 1:
if alpha != 0:
loss += alpha * instance_contrastive_loss(z1, z2)
if d >= temporal_unit:
if 1 - alpha != 0:
loss += (1 - alpha) * temporal_contrastive_loss(z1, z2)
d += 1
z1 = F.max_pool1d(z1.transpose(1, 2), kernel_size=2).transpose(1, 2)
z2 = F.max_pool1d(z2.transpose(1, 2), kernel_size=2).transpose(1, 2)
if z1.size(1) == 1:
if alpha != 0:
loss += alpha * instance_contrastive_loss(z1, z2)
d += 1
return loss / d
def instance_contrastive_loss(z1, z2):
B, T = z1.size(0), z1.size(1)
if B == 1:
return z1.new_tensor(0.)
z = torch.cat([z1, z2], dim=0) # 2B x T x C
z = z.transpose(0, 1) # T x 2B x C
sim = torch.matmul(z, z.transpose(1, 2)) # T x 2B x 2B
logits = torch.tril(sim, diagonal=-1)[:, :, :-1] # T x 2B x (2B-1)
logits += torch.triu(sim, diagonal=1)[:, :, 1:]
logits = -F.log_softmax(logits, dim=-1)
i = torch.arange(B, device=z1.device)
loss = (logits[:, i, B + i - 1].mean() + logits[:, B + i, i].mean()) / 2
return loss
def temporal_contrastive_loss(z1, z2):
B, T = z1.size(0), z1.size(1)
if T == 1:
return z1.new_tensor(0.)
z = torch.cat([z1, z2], dim=1) # B x 2T x C
sim = torch.matmul(z, z.transpose(1, 2)) # B x 2T x 2T
logits = torch.tril(sim, diagonal=-1)[:, :, :-1] # B x 2T x (2T-1)
logits += torch.triu(sim, diagonal=1)[:, :, 1:]
logits = -F.log_softmax(logits, dim=-1)
t = torch.arange(T, device=z1.device)
loss = (logits[:, t, T + t - 1].mean() + logits[:, T + t, t].mean()) / 2
return loss
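A sketch of the contrastive objective on two augmented views of the same batch (the `B x T x C` representations would normally come from `TSEncoder`; random tensors stand in here):

```python
import torch
from models.losses import hierarchical_contrastive_loss

z1 = torch.randn(8, 64, 320, requires_grad=True)   # view 1 of the batch
z2 = torch.randn(8, 64, 320, requires_grad=True)   # view 2 of the batch
loss = hierarchical_contrastive_loss(z1, z2)       # averaged over pooling scales
loss.backward()
print(loss.item())
```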
================================================
FILE: ts_anomaly_detection_methods/anomaly_transformer/tasks/__init__.py
================================================
from .anomaly_detection import eval_anomaly_detection, eval_anomaly_detection_coldstart,np_shift,eval_ad_result
================================================
FILE: ts_anomaly_detection_methods/anomaly_transformer/tasks/anomaly_detection.py
================================================
import numpy as np
import time
from sklearn.metrics import f1_score, precision_score, recall_score
import bottleneck as bn
import pdb
# consider delay threshold and missing segments
def get_range_proba(predict, label, delay=7):
splits = np.where(label[1:] != label[:-1])[0] + 1
is_anomaly = label[0] == 1
new_predict = np.array(predict)
pos = 0
for sp in splits:
if is_anomaly:
if 1 in predict[pos:min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
is_anomaly = not is_anomaly
pos = sp
sp = len(label)
if is_anomaly: # anomaly in the end
if 1 in predict[pos: min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
return new_predict
# set missing = 0
def reconstruct_label(timestamp, label):
timestamp = np.asarray(timestamp, np.int64)
index = np.argsort(timestamp)
timestamp_sorted = np.asarray(timestamp[index])
interval = np.min(np.diff(timestamp_sorted))
label = np.asarray(label, np.int64)
label = np.asarray(label[index])
idx = (timestamp_sorted - timestamp_sorted[0]) // interval
new_label = np.zeros(shape=((timestamp_sorted[-1] - timestamp_sorted[0]) // interval + 1,), dtype=np.int64)
new_label[idx] = label
return new_label
def eval_ad_result(test_pred_list, test_labels_list, test_timestamps_list, delay):
labels = []
pred = []
for test_pred, test_labels, test_timestamps in zip(test_pred_list, test_labels_list, test_timestamps_list):
assert test_pred.shape == test_labels.shape == test_timestamps.shape
test_labels = reconstruct_label(test_timestamps, test_labels)
test_pred = reconstruct_label(test_timestamps, test_pred)
test_pred = get_range_proba(test_pred, test_labels, delay)
labels.append(test_labels)
pred.append(test_pred)
labels = np.concatenate(labels)
pred = np.concatenate(pred)
return {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred)
}
def np_shift(arr, num, fill_value=np.nan):
result = np.empty_like(arr)
if num > 0:
result[:num] = fill_value
result[num:] = arr[:-num]
elif num < 0:
result[num:] = fill_value
result[:num] = arr[-num:]
else:
result[:] = arr
return result
def eval_anomaly_detection(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay):
t = time.time()
all_train_repr = {}
all_test_repr = {}
all_train_repr_wom = {}
all_test_repr_wom = {}
for k in all_train_data:
print(k)
train_data = all_train_data[k]
test_data = all_test_data[k]
full_repr = model.encode(
np.concatenate([train_data, test_data]).reshape(1, -1, 1),
mask='mask_last',
casual=True,
sliding_length=1,
sliding_padding=200,
batch_size=256
).squeeze()
all_train_repr[k] = full_repr[:len(train_data)] # (n_timestamps, repr-dims)
all_test_repr[k] = full_repr[len(train_data):] # (n_timestamps, repr-dims)
full_repr_wom = model.encode(
np.concatenate([train_data, test_data]).reshape(1, -1, 1),
casual=True,
sliding_length=1,
sliding_padding=200,
batch_size=256
).squeeze()
all_train_repr_wom[k] = full_repr_wom[:len(train_data)] # (n_timestamps, repr-dims)
all_test_repr_wom[k] = full_repr_wom[len(train_data):] # (n_timestamps, repr-dims)
# print(np.shape(all_train_repr[k]))
# print(np.shape(all_test_repr[k]))
# print(np.shape(all_train_repr_wom[k]))
# print(np.shape(all_test_repr_wom[k]))
# print("#####################")
# raise Exception('my personal exception!')
pdb.set_trace()
res_log = []
labels_log = []
timestamps_log = []
for k in all_train_data:
train_data = all_train_data[k]
train_labels = all_train_labels[k]
train_timestamps = all_train_timestamps[k]
test_data = all_test_data[k]
test_labels = all_test_labels[k]
test_timestamps = all_test_timestamps[k]
train_err = np.abs(all_train_repr_wom[k] - all_train_repr[k]).sum(axis=1)
test_err = np.abs(all_test_repr_wom[k] - all_test_repr[k]).sum(axis=1)
ma = np_shift(bn.move_mean(np.concatenate([train_err, test_err]), 21), 1)
train_err_adj = (train_err - ma[:len(train_err)]) / ma[:len(train_err)]
test_err_adj = (test_err - ma[len(train_err):]) / ma[len(train_err):]
train_err_adj = train_err_adj[22:]
thr = np.mean(train_err_adj) + 4 * np.std(train_err_adj)
test_res = (test_err_adj > thr) * 1
for i in range(len(test_res)):
if i >= delay and test_res[i-delay:i].sum() >= 1:
test_res[i] = 0
res_log.append(test_res)
labels_log.append(test_labels)
timestamps_log.append(test_timestamps)
t = time.time() - t
pdb.set_trace()
eval_res = eval_ad_result(res_log, labels_log, timestamps_log, delay)
eval_res['infer_time'] = t
return res_log, eval_res
def eval_anomaly_detection_coldstart(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay):
t = time.time()
all_data = {}
all_repr = {}
all_repr_wom = {}
for k in all_train_data:
all_data[k] = np.concatenate([all_train_data[k], all_test_data[k]])
all_repr[k] = model.encode(
all_data[k].reshape(1, -1, 1),
mask='mask_last',
casual=True,
sliding_length=1,
sliding_padding=200,
batch_size=256
).squeeze()
all_repr_wom[k] = model.encode(
all_data[k].reshape(1, -1, 1),
casual=True,
sliding_length=1,
sliding_padding=200,
batch_size=256
).squeeze()
res_log = []
labels_log = []
timestamps_log = []
for k in all_data:
data = all_data[k]
labels = np.concatenate([all_train_labels[k], all_test_labels[k]])
timestamps = np.concatenate([all_train_timestamps[k], all_test_timestamps[k]])
err = np.abs(all_repr_wom[k] - all_repr[k]).sum(axis=1)
ma = np_shift(bn.move_mean(err, 21), 1)
err_adj = (err - ma) / ma
MIN_WINDOW = len(data) // 10
thr = bn.move_mean(err_adj, len(err_adj), MIN_WINDOW) + 4 * bn.move_std(err_adj, len(err_adj), MIN_WINDOW)
res = (err_adj > thr) * 1
for i in range(len(res)):
if i >= delay and res[i-delay:i].sum() >= 1:
res[i] = 0
res_log.append(res[MIN_WINDOW:])
labels_log.append(labels[MIN_WINDOW:])
timestamps_log.append(timestamps[MIN_WINDOW:])
t = time.time() - t
eval_res = eval_ad_result(res_log, labels_log, timestamps_log, delay)
eval_res['infer_time'] = t
return res_log, eval_res
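A tiny worked example of the delay-tolerant point adjustment in `get_range_proba` (a sketch, assuming the `anomaly_transformer` directory is on the path with its dependencies installed): a ground-truth anomalous segment counts as fully detected if any positive prediction falls within `delay` steps of its start.

```python
import numpy as np
from tasks.anomaly_detection import get_range_proba

label   = np.array([0, 0, 1, 1, 1, 0, 0])   # anomalous segment at indices 2..4
predict = np.array([0, 0, 0, 1, 0, 0, 0])   # detection one step late
print(get_range_proba(predict, label, delay=7))
# -> [0 0 1 1 1 0 0]: the whole segment is credited
```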
================================================
FILE: ts_anomaly_detection_methods/anomaly_transformer/train.py
================================================
import torch
import numpy as np
import argparse
import os
import sys
import time
import datetime
from ts2vec import TS2Vec
import tasks
import pdb
import datautils
from utils import init_dl_program, name_with_datetime, pkl_save, data_dropout
def save_checkpoint_callback(
save_every=1,
unit='epoch'
):
assert unit in ('epoch', 'iter')
def callback(model, loss):
n = model.n_epochs if unit == 'epoch' else model.n_iters
if n % save_every == 0:
model.save(f'{run_dir}/model_{n}.pkl')
return callback
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('dataset', help='The dataset name')
parser.add_argument('run_name', help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
parser.add_argument('--loader', type=str, required=False, default= 'anomaly', help='The data loader used to load the experimental data. This can be set to anomaly or anomaly_coldstart')
parser.add_argument('--gpu', type=int, default=0, help='The gpu no. used for training and inference (defaults to 0)')
parser.add_argument('--batch-size', type=int, default=8, help='The batch size (defaults to 8)')
parser.add_argument('--lr', type=float, default=0.001, help='The learning rate (defaults to 0.001)')
parser.add_argument('--repr-dims', type=int, default=320, help='The representation dimension (defaults to 320)')
parser.add_argument('--max-train-length', type=int, default=3000, help='For sequence with a length greater than <max_train_length>, it would be cropped into some sequences, each of which has a length less than <max_train_length> (defaults to 3000)')
parser.add_argument('--iters', type=int, default=10, help='The number of iterations')
parser.add_argument('--epochs', type=int, default=100, help='The number of epochs')
parser.add_argument('--save-every', type=int, default=1, help='Save the checkpoint every <save_every> iterations/epochs')
parser.add_argument('--seed', type=int, default=123, help='The random seed')
parser.add_argument('--max-threads', type=int, default=4, help='The maximum allowed number of threads used by this process')
parser.add_argument('--eval', type=bool, default=False, help='Whether to perform evaluation after training')
parser.add_argument('--irregular', type=float, default=0, help='The ratio of missing observations (defaults to 0)')
args = parser.parse_args()
print("Dataset:", args.dataset)
print("Arguments:", str(args))
device = init_dl_program(args.gpu, seed=args.seed, max_threads=args.max_threads)
print('Loading data... ', end='')
if args.loader == 'anomaly':
task_type = 'anomaly_detection'
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
train_data = datautils.gen_ano_train_data(all_train_data)
elif args.loader == 'anomaly_coldstart':
task_type = 'anomaly_detection_coldstart'
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
train_data, _, _, _ = datautils.load_UCR('FordA')
else:
raise ValueError(f"Unknown loader {args.loader}.")
if args.irregular > 0:
raise ValueError(f"Task type {task_type} is not supported when irregular>0.")
print('done')
config = dict(
batch_size=args.batch_size,
lr=args.lr,
output_dims=args.repr_dims,
max_train_length=args.max_train_length
)
if args.save_every is not None:
unit = 'epoch' if args.epochs is not None else 'iter'
config[f'after_{unit}_callback'] = save_checkpoint_callback(args.save_every, unit)
run_dir = 'training/' + args.dataset + '__' + name_with_datetime(args.run_name)
os.makedirs(run_dir, exist_ok=True)
t = time.time()
model = TS2Vec(
input_dims=train_data.shape[-1],
device=device,
**config
)
# loss_log = model.fit(
# train_data,
# n_epochs=args.epochs,
# n_iters=args.iters,
# verbose=True
# )
model.save(f'{run_dir}/model.pkl')
t = time.time() - t
print(f"\nTraining time: {datetime.timedelta(seconds=t)}")
print("Training time(seconds): ", t)
if args.eval:
if task_type == 'anomaly_detection':
out, eval_res = tasks.eval_anomaly_detection(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay)
elif task_type == 'anomaly_detection_coldstart':
out, eval_res = tasks.eval_anomaly_detection_coldstart(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay)
else:
assert False
pkl_save(f'{run_dir}/out.pkl', out)
pkl_save(f'{run_dir}/eval_res.pkl', eval_res)
print('Evaluation result:', eval_res)
print("Finished.")
================================================
FILE: ts_anomaly_detection_methods/anomaly_transformer/trainATbatch.py
================================================
import logging
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset, SequentialSampler
from utils import data_slice
import datautils
from transformers.optimization import AdamW, get_cosine_schedule_with_warmup
from sklearn.metrics import f1_score
import tasks
from ATmodelbatch import AnomalyTransformer
import time
import bottleneck as bn
import argparse
import os
import pickle
if torch.cuda.is_available():
torch.set_default_tensor_type('torch.cuda.DoubleTensor')
else:
torch.set_default_tensor_type('torch.DoubleTensor')
logger = logging.getLogger(__name__)
class Config:
window_size=100
shuffle=True
epochs=500
warmup_ratio= 0.1
lr= 10e-4
adam_epsilon= 1e-6
batch_size = 512
in_channel=1
dataset_name = "kpi"
d_model=512
layers=3
lambda_=3
save_dir = './save_models'
save_every_epoch = 2
is_train=True
is_eval=True
def train(config,model,all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay):
train_data = datautils.gen_ano_train_data(all_train_data)
config.in_channel = train_data.shape[-1]
train_data = data_slice(train_data,config.window_size)
train_data = torch.from_numpy(train_data)
if torch.cuda.is_available():
train_data = train_data.cuda()
train_dataset = TensorDataset(train_data)
train_dataloader = DataLoader(train_dataset, batch_size=min(config.batch_size, len(train_dataset)), shuffle=config.shuffle, drop_last=True,generator=torch.Generator(device='cuda:0'))
total_steps = int(len(train_dataloader) * config.epochs)
warmup_steps = max(int(total_steps * config.warmup_ratio), 200)
optimizer = AdamW(
model.parameters(),
lr=config.lr,
eps=config.adam_epsilon,
)
scheduler = get_cosine_schedule_with_warmup(
optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
print("Total steps: {}".format(total_steps))
print("Warmup steps: {}".format(warmup_steps))
for epoch in range(int(config.epochs)):
print(epoch)
if (epoch+1) % config.save_every_epoch == 0:
path = config.save_dir+'/'+model.to_string()+'_epoch:%d' % (epoch+1)
os.makedirs(path,exist_ok=True)
torch.save(model,path+'/model.pt')
f1,pre,recall = evaluate(config,epoch+1,model,all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay)
print('epoch:%d\tf1:%f\tp:%f\tr:%f' % (epoch+1,f1,pre,recall))
model.zero_grad()
for step, batch in enumerate(train_dataloader):
batch=batch[0]
model(batch)
min_loss = model.min_loss(batch)
max_loss = model.max_loss(batch)
#print('minloss:%f\tmaxloss:%f' % (min_loss.detach().cpu(),max_loss.detach().cpu()))
optimizer.zero_grad()
min_loss.backward(retain_graph=True)
max_loss.backward()
optimizer.step()
scheduler.step()
def evaluate(config,cur_epoch,model,all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay):
res_log = []
labels_log = []
timestamps_log = []
t = time.time()
for k in all_train_data:
train_data = all_train_data[k]
train_labels = all_train_labels[k]
train_timestamps = all_train_timestamps[k]
train_length = train_labels.shape[0]
test_data = all_test_data[k]
test_labels = all_test_labels[k]
test_timestamps = all_test_timestamps[k]
test_length = test_labels.shape[0]
train_err = model.anomaly_score_whole(train_data).detach().cpu().numpy()
test_err = model.anomaly_score_whole(test_data).detach().cpu().numpy()
train_err = train_err[:train_length]
test_err = test_err[:test_length]
ma = tasks.np_shift(bn.move_mean(np.concatenate([train_err, test_err]), 21), 1)
train_err_adj = (train_err - ma[:len(train_err)]) / ma[:len(train_err)]
test_err_adj = (test_err - ma[len(train_err):]) / ma[len(train_err):]
train_err_adj = train_err_adj[22:]
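# The 21-point moving average plus the one-step shift leave roughly the first 21
# adjusted values undefined, so the first 22 are dropped above. The threshold below
# is mean + 4 * std of the adjusted train errors; the factor 4 is a fixed
# sensitivity constant in this script, not a tuned value.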
thr = np.mean(train_err_adj) + 4 * np.std(train_err_adj)
test_res = (test_err_adj > thr) * 1
for i in range(len(test_res)):
if i >= delay and test_res[i-delay:i].sum() >= 1:
test_res[i] = 0
res_log.append(test_res)
labels_log.append(test_labels)
timestamps_log.append(test_timestamps)
t = time.time() - t
eval_res = tasks.eval_ad_result(res_log, labels_log, timestamps_log, delay)
eval_res['infer_time'] = t
'''
eval_res: {'f1': ..., 'precision': ..., 'recall': ..., 'infer_time': ...}
'''
'''save_results'''
path = config.save_dir+'/'+model.to_string()+'_epoch:%d' % (cur_epoch)
os.makedirs(path,exist_ok=True)
with open(path+'/res_log.pkl','wb') as f:
pickle.dump(res_log,f)
with open(path+'/eval_res.pkl','wb') as f:
pickle.dump(eval_res,f)
with open(path+'/results.txt','w') as f:
f.write('f1:%f\tp:%f\tr:%f\n' % (eval_res['f1'],eval_res['precision'],eval_res['recall']))
return eval_res['f1'],eval_res['precision'],eval_res['recall']
def main(config):
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(config.dataset_name)
print('data loaded!')
model = AnomalyTransformer(config.batch_size,config.window_size,config.in_channel,config.d_model,config.layers,config.lambda_)
print('model built!')
print('training started!')
if config.is_train:
model.train()
train(config,model,all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay)
'''save_trained_model'''
path = config.save_dir+'/'+model.to_string()+'_epoch:%d' % (config.epochs)
os.makedirs(path,exist_ok=True)
torch.save(model,path+'/model.pt')
print('train finished! evaluating...')
if config.is_eval:
model.eval()
f1, pre, recall = evaluate(config,config.epochs,model,all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay)  # evaluate() returns a 3-tuple
print('evaluate finished!')
if __name__ == "__main__":
config = Config()
main(config)
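# Usage: hyperparameters live in the Config class above; run with
#   python trainATbatch.py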
================================================
FILE: ts_anomaly_detection_methods/anomaly_transformer/ts2vec.py
================================================
import torch
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
from models import TSEncoder
from models.losses import hierarchical_contrastive_loss
from utils import take_per_row, split_with_nan, centerize_vary_length_series, torch_pad_nan
import math
class TS2Vec:
'''The TS2Vec model'''
def __init__(
self,
input_dims,
output_dims=320,
hidden_dims=64,
depth=10,
device='cuda',
lr=0.001,
batch_size=16,
max_train_length=None,
temporal_unit=0,
after_iter_callback=None,
after_epoch_callback=None
):
''' Initialize a TS2Vec model.
Args:
input_dims (int): The input dimension. For a univariate time series, this should be set to 1.
output_dims (int): The representation dimension.
hidden_dims (int): The hidden dimension of the encoder.
depth (int): The number of hidden residual blocks in the encoder.
device (str): The device used for training and inference (e.g., 'cuda').
lr (float): The learning rate.
batch_size (int): The batch size.
max_train_length (Union[int, NoneType]): The maximum allowed sequence length for training. Sequences longer than this value are cropped into subsequences, each no longer than this value.
temporal_unit (int): The minimum unit to perform temporal contrast. When training on a very long sequence, this param helps to reduce the cost of time and memory.
after_iter_callback (Union[Callable, NoneType]): A callback function that would be called after each iteration.
after_epoch_callback (Union[Callable, NoneType]): A callback function that would be called after each epoch.
'''
super().__init__()
self.device = device
self.lr = lr
self.batch_size = batch_size
self.max_train_length = max_train_length
self.temporal_unit = temporal_unit
self._net = TSEncoder(input_dims=input_dims, output_dims=output_dims, hidden_dims=hidden_dims, depth=depth).to(self.device)
self.net = torch.optim.swa_utils.AveragedModel(self._net)
self.net.update_parameters(self._net)
self.after_iter_callback = after_iter_callback
self.after_epoch_callback = after_epoch_callback
self.n_epochs = 0
self.n_iters = 0
def fit(self, train_data, n_epochs=None, n_iters=None, verbose=False):
''' Training the TS2Vec model.
Args:
train_data (numpy.ndarray): The training data. It should have a shape of (n_instance, n_timestamps, n_features). All missing data should be set to NaN.
n_epochs (Union[int, NoneType]): The number of epochs. When this number is reached, training stops.
n_iters (Union[int, NoneType]): The number of iterations. When this number is reached, training stops. If neither n_epochs nor n_iters is specified, a default is used that sets n_iters to 200 for a dataset with size <= 100000, and to 600 otherwise.
verbose (bool): Whether to print the training loss after each epoch.
Returns:
loss_log: a list containing the training losses on each epoch.
'''
assert train_data.ndim == 3
if n_iters is None and n_epochs is None:
n_iters = 200 if train_data.size <= 100000 else 600 # default param for n_iters
if self.max_train_length is not None:
sections = train_data.shape[1] // self.max_train_length
if sections >= 2:
train_data = np.concatenate(split_with_nan(train_data, sections, axis=1), axis=0)
temporal_missing = np.isnan(train_data).all(axis=-1).any(axis=0)
if temporal_missing[0] or temporal_missing[-1]:
train_data = centerize_vary_length_series(train_data)
train_data = train_data[~np.isnan(train_data).all(axis=2).all(axis=1)]
train_dataset = TensorDataset(torch.from_numpy(train_data).to(torch.float))
train_loader = DataLoader(train_dataset, batch_size=min(self.batch_size, len(train_dataset)), shuffle=True, drop_last=True)
optimizer = torch.optim.AdamW(self._net.parameters(), lr=self.lr)
loss_log = []
while True:
if n_epochs is not None and self.n_epochs >= n_epochs:
break
cum_loss = 0
n_epoch_iters = 0
interrupted = False
for batch in train_loader:
if n_iters is not None and self.n_iters >= n_iters:
interrupted = True
break
x = batch[0] #(batch_size, n_timestamps, n_features)
# print("#####################")
# raise Exception('my personal exception!')
if self.max_train_length is not None and x.size(1) > self.max_train_length:
window_offset = np.random.randint(x.size(1) - self.max_train_length + 1)
x = x[:, window_offset : window_offset + self.max_train_length]
x = x.to(self.device)
ts_l = x.size(1)
crop_l = np.random.randint(low=2 ** (self.temporal_unit + 1), high=ts_l+1)
crop_left = np.random.randint(ts_l - crop_l + 1)
crop_right = crop_left + crop_l
crop_eleft = np.random.randint(crop_left + 1)
crop_eright = np.random.randint(low=crop_right, high=ts_l + 1)
crop_offset = np.random.randint(low=-crop_eleft, high=ts_l - crop_eright + 1, size=x.size(0))
optimizer.zero_grad()
out1 = self._net(take_per_row(x, crop_offset + crop_eleft, crop_right - crop_eleft))
out1 = out1[:, -crop_l:]
out2 = self._net(take_per_row(x, crop_offset + crop_left, crop_eright - crop_left))
out2 = out2[:, :crop_l]
loss = hierarchical_contrastive_loss(
out1,
out2,
temporal_unit=self.temporal_unit
)
loss.backward()
optimizer.step()
self.net.update_parameters(self._net)
cum_loss += loss.item()
n_epoch_iters += 1
self.n_iters += 1
if self.after_iter_callback is not None:
self.after_iter_callback(self, loss.item())
if interrupted:
break
cum_loss /= n_epoch_iters
loss_log.append(cum_loss)
if verbose:
print(f"Epoch #{self.n_epochs}: loss={cum_loss}")
self.n_epochs += 1
if self.after_epoch_callback is not None:
self.after_epoch_callback(self, cum_loss)
return loss_log
def _eval_with_pooling(self, x, mask=None, slicing=None, encoding_window=None):
out = self.net(x.to(self.device, non_blocking=True), mask)
if encoding_window == 'full_series':
if slicing is not None:
out = out[:, slicing]
out = F.max_pool1d(
out.transpose(1, 2),
kernel_size = out.size(1),
).transpose(1, 2)
elif isinstance(encoding_window, int):
out = F.max_pool1d(
out.transpose(1, 2),
kernel_size = encoding_window,
stride = 1,
padding = encoding_window // 2
).transpose(1, 2)
if encoding_window % 2 == 0:
out = out[:, :-1]
if slicing is not None:
out = out[:, slicing]
elif encoding_window == 'multiscale':
p = 0
reprs = []
while (1 << p) + 1 < out.size(1):
t_out = F.max_pool1d(
out.transpose(1, 2),
kernel_size = (1 << (p + 1)) + 1,
stride = 1,
padding = 1 << p
).transpose(1, 2)
if slicing is not None:
t_out = t_out[:, slicing]
reprs.append(t_out)
p += 1
out = torch.cat(reprs, dim=-1)
else:
if slicing is not None:
out = out[:, slicing]
return out.cpu()
def encode(self, data, mask=None, encoding_window=None, casual=False, sliding_length=None, sliding_padding=0, batch_size=None):
''' Compute representations using the model.
Args:
data (numpy.ndarray): This should have a shape of (n_instance, n_timestamps, n_features). All missing data should be set to NaN.
mask (str): The mask used by encoder can be specified with this parameter. This can be set to 'binomial', 'continuous', 'all_true', 'all_false' or 'mask_last'.
encoding_window (Union[str, int]): When this param is specified, the computed representation would be max pooled over this window. This can be set to 'full_series', 'multiscale' or an integer specifying the pooling kernel size.
casual (bool): When set to True, future information is not encoded into the representation of each timestamp.
sliding_length (Union[int, NoneType]): The length of the sliding window. When this param is specified, sliding inference is applied on the time series.
sliding_padding (int): The contextual data length used for inference in each sliding window.
batch_size (Union[int, NoneType]): The batch size used for inference. If not specified, this would be the same batch size as training.
Returns:
repr: The representations for data.
'''
assert self.net is not None, 'please train or load a net first'
assert data.ndim == 3
if batch_size is None:
batch_size = self.batch_size
n_samples, ts_l, _ = data.shape
org_training = self.net.training
self.net.eval()
dataset = TensorDataset(torch.from_numpy(data).to(torch.float))
loader = DataLoader(dataset, batch_size=batch_size)
with torch.no_grad():
output = []
for batch in loader:
x = batch[0]
if sliding_length is not None:
reprs = []
if n_samples < batch_size:
calc_buffer = []
calc_buffer_l = 0
for i in range(0, ts_l, sliding_length):
l = i - sliding_padding
r = i + sliding_length + (sliding_padding if not casual else 0)
x_sliding = torch_pad_nan(
x[:, max(l, 0) : min(r, ts_l)],
left=-l if l<0 else 0,
right=r-ts_l if r>ts_l else 0,
dim=1
)
if n_samples < batch_size:
if calc_buffer_l + n_samples > batch_size:
out = self._eval_with_pooling(
torch.cat(calc_buffer, dim=0),
mask,
slicing=slice(sliding_padding, sliding_padding+sliding_length),
encoding_window=encoding_window
)
reprs += torch.split(out, n_samples)
calc_buffer = []
calc_buffer_l = 0
calc_buffer.append(x_sliding)
calc_buffer_l += n_samples
else:
out = self._eval_with_pooling(
x_sliding,
mask,
slicing=slice(sliding_padding, sliding_padding+sliding_length),
encoding_window=encoding_window
)
reprs.append(out)
if n_samples < batch_size:
if calc_buffer_l > 0:
out = self._eval_with_pooling(
torch.cat(calc_buffer, dim=0),
mask,
slicing=slice(sliding_padding, sliding_padding+sliding_length),
encoding_window=encoding_window
)
reprs += torch.split(out, n_samples)
calc_buffer = []
calc_buffer_l = 0
out = torch.cat(reprs, dim=1)
if encoding_window == 'full_series':
out = F.max_pool1d(
out.transpose(1, 2).contiguous(),
kernel_size = out.size(1),
).squeeze(1)
else:
out = self._eval_with_pooling(x, mask, encoding_window=encoding_window)
if encoding_window == 'full_series':
out = out.squeeze(1)
output.append(out)
output = torch.cat(output, dim=0)
self.net.train(org_training)
return output.numpy()
def save(self, fn):
''' Save the model to a file.
Args:
fn (str): filename.
'''
torch.save(self.net.state_dict(), fn)
def load(self, fn):
''' Load the model from a file.
Args:
fn (str): filename.
'''
state_dict = torch.load(fn, map_location=self.device)
self.net.load_state_dict(state_dict)
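# --- Illustrative usage sketch (an assumption-laden example, not part of the original file) ---
# Shapes follow the fit()/encode() docstrings above; the data here is random.
#   import numpy as np
#   model = TS2Vec(input_dims=1, output_dims=320, device='cuda')
#   train_data = np.random.randn(8, 3000, 1)                                # (n_instance, n_timestamps, n_features)
#   loss_log = model.fit(train_data, n_epochs=5, verbose=True)
#   series_reprs = model.encode(train_data, encoding_window='full_series')  # -> (8, 320)
#   point_reprs = model.encode(train_data, casual=True, sliding_length=1,
#                              sliding_padding=50)                          # -> (8, 3000, 320)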
================================================
FILE: ts_anomaly_detection_methods/anomaly_transformer/utils.py
================================================
import os
import numpy as np
import pickle
import torch
import random
from datetime import datetime
def pkl_save(name, var):
with open(name, 'wb') as f:
pickle.dump(var, f)
def pkl_load(name):
with open(name, 'rb') as f:
return pickle.load(f)
def split_N_pad(series,window_size):
assert len(series.shape)==2
ret=[]
l=series.shape[0]
for i in range(l//window_size):
ret.append(series[i*window_size:(i+1)*window_size,:])
left = l-l//window_size*window_size
'''TODO:pad'''
if left!=0:
p = np.zeros([window_size,series.shape[1]])
p[:left,:]=series[-left:,:]
ret.append(p)
return ret
'''for AT'''
def data_slice(data,window_size):
'''
data : [size,length,dim]
'''
assert len(data.shape)==3
ret=[]
for i in range(data.shape[0]):
series = data[i]
ret.extend(split_N_pad(series,window_size))
return np.array(ret)
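# Worked example: two series of length 250 with window_size=100 yield two full
# windows each plus one zero-padded tail window, so
#   data_slice(np.zeros([2, 250, 1]), 100).shape == (6, 100, 1)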
def torch_pad_nan(arr, left=0, right=0, dim=0):
if left > 0:
padshape = list(arr.shape)
padshape[dim] = left
arr = torch.cat((torch.full(padshape, np.nan), arr), dim=dim)
if right > 0:
padshape = list(arr.shape)
padshape[dim] = right
arr = torch.cat((arr, torch.full(padshape, np.nan)), dim=dim)
return arr
def pad_nan_to_target(array, target_length, axis=0, both_side=False):
assert array.dtype in [np.float16, np.float32, np.float64]
pad_size = target_length - array.shape[axis]
if pad_size <= 0:
return array
npad = [(0, 0)] * array.ndim
if both_side:
npad[axis] = (pad_size // 2, pad_size - pad_size//2)
else:
npad[axis] = (0, pad_size)
return np.pad(array, pad_width=npad, mode='constant', constant_values=np.nan)
def pad_zero_to_target(array, target_length, axis=0, both_side=False):
assert array.dtype in [np.float16, np.float32, np.float64]
pad_size = target_length - array.shape[axis]
if pad_size <= 0:
return array
npad = [(0, 0)] * array.ndim
if both_side:
npad[axis] = (pad_size // 2, pad_size - pad_size//2)
else:
npad[axis] = (0, pad_size)
return np.pad(array, pad_width=npad, mode='constant', constant_values=0)
def split_with_nan(x, sections, axis=0):
assert x.dtype in [np.float16, np.float32, np.float64]
arrs = np.array_split(x, sections, axis=axis)
target_length = arrs[0].shape[axis]
for i in range(len(arrs)):
arrs[i] = pad_nan_to_target(arrs[i], target_length, axis=axis)
return arrs
def take_per_row(A, indx, num_elem):
all_indx = indx[:,None] + np.arange(num_elem)
return A[torch.arange(all_indx.shape[0])[:,None], all_indx]
def centerize_vary_length_series(x):
prefix_zeros = np.argmax(~np.isnan(x).all(axis=-1), axis=1)
suffix_zeros = np.argmax(~np.isnan(x[:, ::-1]).all(axis=-1), axis=1)
offset = (prefix_zeros + suffix_zeros) // 2 - prefix_zeros
rows, column_indices = np.ogrid[:x.shape[0], :x.shape[1]]
offset[offset < 0] += x.shape[1]
column_indices = column_indices - offset[:, np.newaxis]
return x[rows, column_indices]
def data_dropout(arr, p):
B, T = arr.shape[0], arr.shape[1]
mask = np.full(B*T, False, dtype=bool)
ele_sel = np.random.choice(
B*T,
size=int(B*T*p),
replace=False
)
mask[ele_sel] = True
res = arr.copy()
res[mask.reshape(B, T)] = np.nan
return res
def name_with_datetime(prefix='default'):
now = datetime.now()
return prefix + '_' + now.strftime("%Y%m%d_%H%M%S")
def init_dl_program(
device_name,
seed=None,
use_cudnn=True,
deterministic=False,
benchmark=False,
use_tf32=False,
max_threads=None
):
import torch
if max_threads is not None:
torch.set_num_threads(max_threads) # intraop
if torch.get_num_interop_threads() != max_threads:
torch.set_num_interop_threads(max_threads) # interop
try:
import mkl
except ImportError:
pass
else:
mkl.set_num_threads(max_threads)
if seed is not None:
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if isinstance(device_name, (str, int)):
device_name = [device_name]
devices = []
for t in reversed(device_name):
t_device = torch.device(t)
devices.append(t_device)
if t_device.type == 'cuda':
assert torch.cuda.is_available()
torch.cuda.set_device(t_device)
if seed is not None:
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
devices.reverse()
torch.backends.cudnn.enabled = use_cudnn
torch.backends.cudnn.deterministic = deterministic
torch.backends.cudnn.benchmark = benchmark
if hasattr(torch.backends.cudnn, 'allow_tf32'):
torch.backends.cudnn.allow_tf32 = use_tf32
torch.backends.cuda.matmul.allow_tf32 = use_tf32
return devices if len(devices) > 1 else devices[0]
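# Illustrative call (device id, seed, and thread count are assumptions):
#   device = init_dl_program(0, seed=123, max_threads=4)
# A single torch.device is returned when one device name is given; a list otherwise.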
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/AT_solver.py
================================================
import os
import time
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.utils.data import TensorDataset, DataLoader
from other_anomaly_baselines.metrics.metrics import *
from tadpak import evaluate
from other_anomaly_baselines.models.AnomalyTransformer import AnomalyTransformer
# def to_var(x, volatile=False):
# if torch.cuda.is_available():
# x = x.cuda()
# return Variable(x, volatile=volatile)
class UniLoader_train(object):
def __init__(self, data_set, win_size, step, mode="train"):
self.mode = mode
self.step = step
self.win_size = win_size
self.train = data_set
def __len__(self):
"""
Number of images in the object dataset.
"""
return (self.train.shape[0] - self.win_size) // self.step + 1
def __getitem__(self, index):
index = index * self.step
return np.float32(self.train[index:index + self.win_size])
class UniLoader_test(object):
def __init__(self, data_set, label_set, win_size, step, mode="train"):
self.mode = mode
self.step = step
self.win_size = win_size
self.train = data_set
self.train_labels = label_set
def __len__(self):
"""
Number of images in the object dataset.
"""
return (self.train.shape[0] - self.win_size) // self.step + 1
def __getitem__(self, index):
index = index * self.step
return np.float32(self.train[index:index + self.win_size]), np.float32(self.train_labels[index:index + self.win_size])  # labels aligned with the data window
def split_N_pad(series,window_size):
assert len(series.shape)==2
ret=[]
l=series.shape[0]
for i in range(l//window_size):
ret.append(series[i*window_size:(i+1)*window_size,:])
left = l-l//window_size*window_size
'''TODO:pad'''
if left!=0:
p = np.zeros([window_size,series.shape[1]])
p[:left,:]=series[-left:,:]
ret.append(p)
return ret
def mkdir(directory):
if not os.path.exists(directory):
os.makedirs(directory)
def my_kl_loss(p, q):
res = p * (torch.log(p + 0.0001) - torch.log(q + 0.0001))
return torch.mean(torch.sum(res, dim=-1), dim=1)
def adjust_learning_rate(optimizer, epoch, lr_):
lr_adjust = {epoch: lr_ * (0.5 ** ((epoch - 1) // 1))}
if epoch in lr_adjust.keys():
lr = lr_adjust[epoch]
for param_group in optimizer.param_groups:
param_group['lr'] = lr
print('Updating learning rate to {}'.format(lr))
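# Worked schedule: the rate halves every epoch, e.g. with lr_=1e-4:
# epoch 1 -> 1e-4, epoch 2 -> 5e-5, epoch 3 -> 2.5e-5, ...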
class EarlyStopping:
def __init__(self, patience=7, verbose=False, dataset_name='', delta=0):
self.patience = patience
self.verbose = verbose
self.counter = 0
self.best_score = None
self.best_score2 = None
self.early_stop = False
self.val_loss_min = np.inf
self.val_loss2_min = np.inf
self.delta = delta
self.dataset = dataset_name
def __call__(self, val_loss, val_loss2, model, path):
score = -val_loss
score2 = -val_loss2
if self.best_score is None:
self.best_score = score
self.best_score2 = score2
self.save_checkpoint(val_loss, val_loss2, model, path)
elif score < self.best_score + self.delta or score2 < self.best_score2 + self.delta:
self.counter += 1
print(f'EarlyStopping counter: {self.counter} out of {self.patience}')
if self.counter >= self.patience:
self.early_stop = True
else:
self.best_score = score
self.best_score2 = score2
self.save_checkpoint(val_loss, val_loss2, model, path)
self.counter = 0
def save_checkpoint(self, val_loss, val_loss2, model, path):
if self.verbose:
print(f'Validation loss decreased ({self.val_loss_min:.6f} --> {val_loss:.6f}). Saving model ...')
torch.save(model.state_dict(), os.path.join(path, str(self.dataset) + '_checkpoint.pth'))
self.val_loss_min = val_loss
self.val_loss2_min = val_loss2
class Solver(object):
DEFAULTS = {}
def __init__(self, config, train_set, train_loader, val_set, val_loader, test_set, test_loader, dev_cuda):
self.__dict__.update(Solver.DEFAULTS, **config)
self.train_loader = train_loader
self.vali_loader = val_loader
self.test_loader = test_loader
self.device = dev_cuda
self.build_model()
# self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
self.criterion = nn.MSELoss()
def build_model(self):
self.model = AnomalyTransformer(win_size=self.win_size, enc_in=self.input_c, c_out=self.output_c, e_layers=3, cud_device=self.device)
self.optimizer = torch.optim.Adam(self.model.parameters(), lr=self.lr)
# if torch.cuda.is_available():
self.model.to(self.device)
def vali(self, vali_loader):
self.model.eval()
loss_1 = []
loss_2 = []
for i, (input_data, _) in enumerate(vali_loader):
input = input_data.float().to(self.device)
output, series, prior, _ = self.model(input)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
series_loss += (torch.mean(my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach())) + torch.mean(
my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach(),
series[u])))
prior_loss += (torch.mean(
my_kl_loss((prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach())) + torch.mean(
my_kl_loss(series[u].detach(),
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)))))
series_loss = series_loss / len(prior)
prior_loss = prior_loss / len(prior)
rec_loss = self.criterion(output, input)
loss_1.append((rec_loss - self.k * series_loss).item())
loss_2.append((rec_loss + self.k * prior_loss).item())
return np.average(loss_1), np.average(loss_2)
def train(self):
print("======================TRAIN MODE======================")
time_now = time.time()
path = self.model_save_path
if not os.path.exists(path):
os.makedirs(path)
early_stopping = EarlyStopping(patience=3, verbose=True, dataset_name=self.dataset)
train_steps = len(self.train_loader)
for epoch in range(self.num_epochs):
iter_count = 0
loss1_list = []
epoch_time = time.time()
self.model.train()
for i, (input_data, labels) in enumerate(self.train_loader):
self.optimizer.zero_grad()
iter_count += 1
input = input_data.float().to(self.device)
output, series, prior, _ = self.model(input)
# calculate Association discrepancy
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
series_loss += (torch.mean(my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach())) + torch.mean(
my_kl_loss((prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach(),
series[u])))
prior_loss += (torch.mean(my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach())) + torch.mean(
my_kl_loss(series[u].detach(), (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)))))
series_loss = series_loss / len(prior)
prior_loss = prior_loss / len(prior)
rec_loss = self.criterion(output, input)
loss1_list.append((rec_loss - self.k * series_loss).item())
loss1 = rec_loss - self.k * series_loss
loss2 = rec_loss + self.k * prior_loss
if (i + 1) % 100 == 0:
speed = (time.time() - time_now) / iter_count
left_time = speed * ((self.num_epochs - epoch) * train_steps - i)
print('\tspeed: {:.4f}s/iter; left time: {:.4f}s'.format(speed, left_time))
iter_count = 0
time_now = time.time()
# Minimax strategy
loss1.backward(retain_graph=True)
loss2.backward()
self.optimizer.step()
print("Epoch: {} cost time: {}".format(epoch + 1, time.time() - epoch_time))
train_loss = np.average(loss1_list)
vali_loss1, vali_loss2 = self.vali(self.vali_loader)
print(
"Epoch: {0}, Steps: {1} | Train Loss: {2:.7f} Vali Loss: {3:.7f} ".format(
epoch + 1, train_steps, train_loss, vali_loss1))
early_stopping(vali_loss1, vali_loss2, self.model, path)
if early_stopping.early_stop:
print("Early stopping")
break
adjust_learning_rate(self.optimizer, epoch + 1, self.lr)
def test(self, ucr_index=None):
self.model.load_state_dict(
torch.load(
os.path.join(str(self.model_save_path), str(self.dataset) + '_checkpoint.pth')))
self.model.eval()
temperature = 50
print("======================TEST MODE======================")
criterion = nn.MSELoss(reduction='none')
# (1) statistics on the train set
attens_energy = []
for i, (input_data, labels) in enumerate(self.train_loader):
input = input_data.float().to(self.device)
output, series, prior, _ = self.model(input)
loss = torch.mean(criterion(input, output), dim=-1)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
if u == 0:
series_loss = my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss = my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
else:
series_loss += my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss += my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
metric = torch.softmax((-series_loss - prior_loss), dim=-1)
cri = metric * loss
cri = cri.detach().cpu().numpy()
attens_energy.append(cri)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
train_energy = np.array(attens_energy)
# (2) find the threshold
attens_energy = []
for i, (input_data, labels) in enumerate(self.test_loader):
input = input_data.float().to(self.device)
output, series, prior, _ = self.model(input)
loss = torch.mean(criterion(input, output), dim=-1)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
if u == 0:
series_loss = my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss = my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
else:
series_loss += my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss += my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
# Metric
metric = torch.softmax((-series_loss - prior_loss), dim=-1)
cri = metric * loss
cri = cri.detach().cpu().numpy()
attens_energy.append(cri)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
test_energy = np.array(attens_energy)
combined_energy = np.concatenate([train_energy, test_energy], axis=0)
thresh = np.percentile(combined_energy, 100 - self.anormly_ratio)
print("Threshold :", thresh)
# (3) evaluation on the test set
test_labels = []
attens_energy = []
for i, (input_data, labels) in enumerate(self.test_loader):
input = input_data.float().to(self.device)
output, series, prior, _ = self.model(input)
loss = torch.mean(criterion(input, output), dim=-1)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
if u == 0:
series_loss = my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss = my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
else:
series_loss += my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss += my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
metric = torch.softmax((-series_loss - prior_loss), dim=-1)
cri = metric * loss
cri = cri.detach().cpu().numpy()
attens_energy.append(cri)
test_labels.append(labels)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
test_labels = np.concatenate(test_labels, axis=0).reshape(-1)
test_energy = np.array(attens_energy)
test_labels = np.array(test_labels)
pred = (test_energy > thresh).astype(int)
gt = test_labels.astype(int)
print("pred: ", pred.shape)
print("gt: ", gt.shape)
# results_f1_pa_k_10 = evaluate.evaluate(test_energy, test_labels, k=10)
# results_f1_pa_k_50 = evaluate.evaluate(test_energy, test_labels, k=50)
# results_f1_pa_k_90 = evaluate.evaluate(test_energy, test_labels, k=90)
#
# eval_res = {
# 'f1': None,
# 'precision': None,
# 'recall': None,
# "Affiliation precision": None,
# "Affiliation recall": None,
# "R_AUC_ROC": None,
# "R_AUC_PR": None,
# "VUS_ROC": None,
# "VUS_PR": None,
# 'f1_pa_10': results_f1_pa_k_10['best_f1_w_pa'],
# 'f1_pa_50': results_f1_pa_k_50['best_f1_w_pa'],
# 'f1_pa_90': results_f1_pa_k_90['best_f1_w_pa'],
# }
eval_res = {
'f1': None,
'precision': None,
'recall': None,
"Affiliation precision": None,
"Affiliation recall": None,
"R_AUC_ROC": None,
"R_AUC_PR": None,
"VUS_ROC": None,
"VUS_PR": None,
'f1_pa_10': None,
'f1_pa_50': None,
'f1_pa_90': None,
}
if ucr_index not in (79, 108, 187, 203):
# # matrix = [self.index]
scores_simple = combine_all_evaluation_scores(pred, gt, test_energy)
for key, value in scores_simple.items():
# matrix.append(value)
if key == 'Affiliation precision':
eval_res["Affiliation precision"] = value
if key == 'Affiliation recall':
eval_res["Affiliation recall"] = value
if key == 'R_AUC_ROC':
eval_res["R_AUC_ROC"] = value
if key == 'R_AUC_PR':
eval_res["R_AUC_PR"] = value
if key == 'VUS_ROC':
eval_res["VUS_ROC"] = value
if key == 'VUS_PR':
eval_res["VUS_PR"] = value
print('{0:21} : {1:0.4f}'.format(key, value))
# detection adjustment: please see this issue for more information https://github.com/thuml/Anomaly-Transformer/issues/14
anomaly_state = False
for i in range(len(gt)):
if gt[i] == 1 and pred[i] == 1 and not anomaly_state:
anomaly_state = True
for j in range(i, 0, -1):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
for j in range(i, len(gt)):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
elif gt[i] == 0:
anomaly_state = False
if anomaly_state:
pred[i] = 1
pred = np.array(pred)
gt = np.array(gt)
print("pred: ", pred.shape)
print("gt: ", gt.shape)
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(gt, pred)
precision, recall, f_score, support = precision_recall_fscore_support(gt, pred,
average='binary')
print(
"Accuracy : {:0.4f}, Precision : {:0.4f}, Recall : {:0.4f}, F-score : {:0.4f} ".format(
accuracy, precision,
recall, f_score))
eval_res['f1'] = f_score
eval_res['precision'] = precision
eval_res['recall'] = recall
return eval_res
def train_uni(self):
print("======================TRAIN MODE======================")
time_now = time.time()
path = self.model_save_path
if not os.path.exists(path):
os.makedirs(path)
early_stopping = EarlyStopping(patience=3, verbose=True, dataset_name=self.dataset)
train_steps = len(self.train_loader)
for epoch in range(self.num_epochs):
iter_count = 0
loss1_list = []
epoch_time = time.time()
self.model.train()
for i, input_data in enumerate(self.train_loader):
self.optimizer.zero_grad()
iter_count += 1
# print("type(input_data) = ", type(input_data), len(input_data))
# # input_data = np.array(input_data)
print("type(input_data) = ", type(input_data), input_data.shape)
input = input_data.float().to(self.device)
output, series, prior, _ = self.model(input)
# calculate Association discrepancy
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
series_loss += (torch.mean(my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach())) + torch.mean(
my_kl_loss((prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach(),
series[u])))
prior_loss += (torch.mean(my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach())) + torch.mean(
my_kl_loss(series[u].detach(), (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)))))
series_loss = series_loss / len(prior)
prior_loss = prior_loss / len(prior)
rec_loss = self.criterion(output, input)
loss1_list.append((rec_loss - self.k * series_loss).item())
loss1 = rec_loss - self.k * series_loss
loss2 = rec_loss + self.k * prior_loss
if (i + 1) % 100 == 0:
speed = (time.time() - time_now) / iter_count
left_time = speed * ((self.num_epochs - epoch) * train_steps - i)
print('\tspeed: {:.4f}s/iter; left time: {:.4f}s'.format(speed, left_time))
iter_count = 0
time_now = time.time()
# Minimax strategy
loss1.backward(retain_graph=True)
loss2.backward()
self.optimizer.step()
print("Epoch: {} cost time: {}".format(epoch + 1, time.time() - epoch_time))
train_loss = np.average(loss1_list)
vali_loss1, vali_loss2 = self.vali(self.vali_loader)
print(
"Epoch: {0}, Steps: {1} | Train Loss: {2:.7f}".format(
epoch + 1, train_steps, train_loss))
early_stopping(vali_loss1, vali_loss2, self.model, path)
if early_stopping.early_stop:
print("Early stopping")
break
adjust_learning_rate(self.optimizer, epoch + 1, self.lr)
def test_uni(self, all_train_data, all_test_data, all_test_labels, all_test_timestamps, delay, config):
# self.model.load_state_dict(
# torch.load(
# os.path.join(str(self.model_save_path), str(self.dataset) + '_checkpoint.pth')))
self.model.eval()
temperature = 50
print("======================TEST MODE======================")
criterion = nn.MSELoss(reduction='none')
# (1) statistics on the train set
attens_energy = []
for k in all_train_data:
train_data = all_train_data[k]
train_data = np.array(train_data)
train_data = np.expand_dims(train_data, axis=-1)
train_dataset = UniLoader_train(train_data, config.win_size, 1)
train_loader = DataLoader(dataset=train_dataset,
batch_size=config.batch_size,
shuffle=True,
num_workers=2,
drop_last=True)
# train_dataset = TensorDataset(torch.from_numpy(train_data).to(torch.float))
# train_loader = DataLoader(train_dataset, batch_size=min(config.batch_size, len(train_dataset)),
# shuffle=True,
# drop_last=True)
for i, input_data in enumerate(train_loader):
# print("type(input) = ", type(input_data), input_data.shape)
input = input_data.float().to(self.device)
output, series, prior, _ = self.model(input)
loss = torch.mean(criterion(input, output), dim=-1)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
if u == 0:
series_loss = my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss = my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
else:
series_loss += my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss += my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
metric = torch.softmax((-series_loss - prior_loss), dim=-1)
cri = metric * loss
cri = cri.detach().cpu().numpy()
attens_energy.append(cri)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
train_energy = np.array(attens_energy)
# (2) find the threshold
attens_energy = []
for k in all_train_data:
_test_labels = all_test_labels[k]
test_data = all_test_data[k]
test_data = np.array(test_data)
test_data = np.expand_dims(test_data, axis=-1)
test_dataset = UniLoader_test(test_data, _test_labels, config.win_size, 1)
test_loader = DataLoader(dataset=test_dataset,
batch_size=config.batch_size,
shuffle=True,
num_workers=2,
drop_last=True)
for i, (input_data, labels) in enumerate(test_loader):
input = input_data.float().to(self.device)
output, series, prior, _ = self.model(input)
loss = torch.mean(criterion(input, output), dim=-1)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
if u == 0:
series_loss = my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss = my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
else:
series_loss += my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss += my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
# Metric
metric = torch.softmax((-series_loss - prior_loss), dim=-1)
cri = metric * loss
cri = cri.detach().cpu().numpy()
attens_energy.append(cri)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
test_energy = np.array(attens_energy)
combined_energy = np.concatenate([train_energy, test_energy], axis=0)
thresh = np.percentile(combined_energy, 100 - self.anormly_ratio)
print("Threshold :", thresh)
# (3) evaluation on the test set
test_labels_list = []
attens_energy = []
for k in all_train_data:
_test_labels = all_test_labels[k]
# test_labels_list.append(_test_labels)
test_data = all_test_data[k]
test_data = np.array(test_data)
test_data = np.expand_dims(test_data, axis=-1)
test_dataset = UniLoader_test(test_data, _test_labels, config.win_size, 1)
test_loader = DataLoader(dataset=test_dataset,
batch_size=config.batch_size,
shuffle=True,
num_workers=2,
drop_last=True)
# test_dataset = TensorDataset(torch.from_numpy(test_data).to(torch.float), torch.from_numpy(_test_labels).float())
# test_loader = DataLoader(test_dataset, batch_size=min(config.batch_size, len(test_dataset)),
# shuffle=True,
# drop_last=True)
for i, (input_data, labels) in enumerate(test_loader):
input = input_data.float().to(self.device)
output, series, prior, _ = self.model(input)
loss = torch.mean(criterion(input, output), dim=-1)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
if u == 0:
series_loss = my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss = my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
else:
series_loss += my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss += my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
metric = torch.softmax((-series_loss - prior_loss), dim=-1)
cri = metric * loss
cri = cri.detach().cpu().numpy()
attens_energy.append(cri)
test_labels_list.append(labels)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
test_labels = np.concatenate(test_labels_list, axis=0).reshape(-1)
test_energy = np.array(attens_energy)
test_labels = np.array(test_labels)
pred = (test_energy > thresh).astype(int)
gt = test_labels.astype(int)
print("pred: ", pred.shape)
print("gt: ", gt.shape)
# results_f1_pa_k_10 = evaluate.evaluate(test_energy, test_labels, k=10)
# results_f1_pa_k_50 = evaluate.evaluate(test_energy, test_labels, k=50)
# results_f1_pa_k_90 = evaluate.evaluate(test_energy, test_labels, k=90)
eval_res = {
'f1': None,
'precision': None,
'recall': None,
"Affiliation precision": None,
"Affiliation recall": None,
"R_AUC_ROC": None,
"R_AUC_PR": None,
"VUS_ROC": None,
"VUS_PR": None,
# 'f1_pa_10': results_f1_pa_k_10['best_f1_w_pa'],
# 'f1_pa_50': results_f1_pa_k_50['best_f1_w_pa'],
# 'f1_pa_90': results_f1_pa_k_90['best_f1_w_pa'],
}
# matrix = [self.index]
scores_simple = combine_all_evaluation_scores(pred, gt, test_energy)
for key, value in scores_simple.items():
# matrix.append(value)
if key == 'Affiliation precision':
eval_res["Affiliation precision"] = value
if key == 'Affiliation recall':
eval_res["Affiliation recall"] = value
if key == 'R_AUC_ROC':
eval_res["R_AUC_ROC"] = value
if key == 'R_AUC_PR':
eval_res["R_AUC_PR"] = value
if key == 'VUS_ROC':
eval_res["VUS_ROC"] = value
if key == 'VUS_PR':
eval_res["VUS_PR"] = value
print('{0:21} : {1:0.4f}'.format(key, value))
# detection adjustment: please see this issue for more information https://github.com/thuml/Anomaly-Transformer/issues/14
anomaly_state = False
for i in range(len(gt)):
if gt[i] == 1 and pred[i] == 1 and not anomaly_state:
anomaly_state = True
for j in range(i, 0, -1):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
for j in range(i, len(gt)):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
elif gt[i] == 0:
anomaly_state = False
if anomaly_state:
pred[i] = 1
pred = np.array(pred)
gt = np.array(gt)
print("pred: ", pred.shape)
print("gt: ", gt.shape)
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(gt, pred)
precision, recall, f_score, support = precision_recall_fscore_support(gt, pred,
average='binary')
print(
"Accuracy : {:0.4f}, Precision : {:0.4f}, Recall : {:0.4f}, F-score : {:0.4f} ".format(
accuracy, precision,
recall, f_score))
eval_res['f1'] = f_score
eval_res['precision'] = precision
eval_res['recall'] = recall
return eval_res
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/ATmodelbatch.py
================================================
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from utils import data_slice, split_N_pad
import time
from torch.utils.data import DataLoader, TensorDataset, SequentialSampler
if torch.cuda.is_available():
torch.set_default_tensor_type('torch.cuda.DoubleTensor')
else:
torch.set_default_tensor_type('torch.DoubleTensor')
class AnomalyAttention(nn.Module):
def __init__(self, N, d_model):
super(AnomalyAttention, self).__init__()
self.d_model = d_model
self.N = N
self.Wq = nn.Linear(d_model, d_model, bias=False)
self.Wk = nn.Linear(d_model, d_model, bias=False)
self.Wv = nn.Linear(d_model, d_model, bias=False)
self.Ws = nn.Linear(d_model, 1, bias=False)
self.Q = self.K = self.V = self.sigma = torch.zeros((N, d_model))
self.P = torch.zeros((N, N))
self.S = torch.zeros((N, N))
def forward(self, x):
# x :[batch,N,d_model]
self.initialize(x)
self.S = self.series_association()
self.P = self.prior_association()
Z = self.reconstruction()
return Z
def initialize(self, x):
self.Q = self.Wq(x)
self.K = self.Wk(x)
self.V = self.Wv(x)
self.sigma = self.Ws(x)
@staticmethod
def gaussian_kernel(mean, sigma):
normalize = 1 / (math.sqrt(2 * torch.pi) * torch.abs(sigma))
return normalize * torch.exp(-0.5 * (mean / sigma).pow(2))
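# The kernel above evaluates N(d; 0, sigma) = exp(-d^2 / (2 sigma^2)) / (sqrt(2*pi) * |sigma|)
# elementwise; prior_association() applies it to the |i - j| distance matrix and
# row-normalizes it, so each row of the prior sums to 1.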
def prior_association(self):
# qwe = torch.from_numpy(
# np.abs(np.indices((self.N, self.N))[0] - np.indices((self.N, self.N))[1])
# ).cuda
qwe = torch.from_numpy(
np.abs(np.indices((self.N, self.N))[0] - np.indices((self.N, self.N))[1])
)
if torch.cuda.is_available():
qwe = qwe.cuda()
# originally gaussian: [batch, N, N]
# since the kernel is Gaussian, row sums and column sums are the same here
gaussian = self.gaussian_kernel(qwe.double(), self.sigma)
gaussian /= gaussian.sum(dim=-1).view(-1, self.N, 1)
return gaussian
def series_association(self):
# originally [N, N]:
# return F.softmax(self.Q @ self.K.T / math.sqrt(self.d_model), dim=0)
# now [batch, N, N]; the original column-wise softmax appears incorrect; judging
# from the downstream reconstruction, it should be row-wise, as done below
return F.softmax(torch.matmul(self.Q, self.K.transpose(1, 2)) / math.sqrt(self.d_model), dim=2)
def reconstruction(self):
return torch.matmul(self.S, self.V)
class AnomalyTransformerBlock(nn.Module):
def __init__(self, N, d_model):
super().__init__()
self.N, self.d_model = N, d_model
self.attention = AnomalyAttention(self.N, self.d_model)
self.ln1 = nn.LayerNorm(self.d_model)
self.ff = nn.Sequential(nn.Linear(self.d_model, self.d_model), nn.ReLU())
self.ln2 = nn.LayerNorm(self.d_model)
def forward(self, x):
# x: [batch,N,d_model]
x_identity = x
x = self.attention(x)
z = self.ln1(x + x_identity)
z_identity = z
z = self.ff(z)
z = self.ln2(z + z_identity)
# z: [batch,N,d_model]
return z
class AnomalyTransformer(nn.Module):
def __init__(self, batch_size, N, in_channel, d_model, layers, lambda_):
super().__init__()
self.batch_size = batch_size
self.in_channel = in_channel
self.N = N
self.d_model = d_model
self.input2hidden = nn.Linear(self.in_channel, self.d_model)
self.hidden2output = nn.Linear(self.d_model, self.in_channel)
self.blocks = nn.ModuleList(
[AnomalyTransformerBlock(self.N, self.d_model) for _ in range(layers)]
)
self.output = None
self.lambda_ = lambda_
self.P_layers = []
self.S_layers = []
def to_string(self):
return 'in_channel:%d_N:%d_dmodel:%d_' % (self.in_channel, self.N, self.d_model)
def forward(self, x):
# x: [batch,N,in_channel]
self.P_layers = []
self.S_layers = []
x = self.input2hidden(x)
for idx, block in enumerate(self.blocks):
x = block(x)
# x: [batch,N,d_model]
self.P_layers.append(block.attention.P)
self.S_layers.append(block.attention.S)
self.output = self.hidden2output(x)
# output: [batch,N,in_channel]
return self.output
# def layer_association_discrepancy(self, Pl, Sl, x):
# rowwise_kl = lambda row: (
# F.kl_div(Pl[row, :], Sl[row, :]) + F.kl_div(Sl[row, :], Pl[row, :])
# )
# ad_vector = torch.concat(
# [rowwise_kl(row).unsqueeze(0) for row in range(Pl.shape[0])]
# )
# return ad_vector
# ad_vector: [N]
# def rowwise_kl (self,Pl,Sl,idx,row):
# return F.kl_div(Pl[idx,row, :], Sl[idx,row, :]) + F.kl_div(Sl[idx,row, :], Pl[idx,row, :])
# def layer_association_discrepancy(self, Pl, Sl, x):
# wholetmp=[]
# for idx in range(Pl.shape[0]):
# rowtmp=[]
# for row in range(Pl.shape[1]):
# rowtmp.append(self.rowwise_kl(Pl,Sl,idx,row).unsqueeze(0))
# wholetmp.append(torch.cat(rowtmp))
# ad_vector = torch.cat(
# wholetmp
# ).reshape([-1,Pl.shape[1]])
# #ad_vector: [batch,N]
# return ad_vector
def rowwise_kl(self, row, Pl, Sl, eps=1e-4):
Pl_r = Pl[:, row, :]
Sl_r = Sl[:, row, :]
Pl_r = (Pl_r + eps) / torch.sum(Pl_r + eps, dim=-1, keepdim=True)
Sl_r = (Sl_r + eps) / torch.sum(Sl_r + eps, dim=-1, keepdim=True)
'''TODO: revise this function'''
ret = torch.sum(
F.kl_div(torch.log(Pl_r), Sl_r, reduction='none') + F.kl_div(torch.log(Sl_r), Pl_r, reduction='none'), dim=1
)
return ret
def layer_association_discrepancy(self, Pl, Sl, x):
ad_vector = torch.concat(
[self.rowwise_kl(row, Pl, Sl).unsqueeze(1) for row in range(Pl.shape[1])], dim=1
)
return ad_vector
def association_discrepancy(self, P_list, S_list, x):
ret = (1 / len(P_list)) * sum(
[
self.layer_association_discrepancy(P, S, x)
for P, S in zip(P_list, S_list)
]
)
# ret: [batch,N]
return ret
def loss_function(self, x_hat, P_list, S_list, lambda_, x):
# P_list: [layers,batch,N,N]
# S_list: [layers,batch,N,N]
frob_norm = torch.linalg.matrix_norm(x_hat - x, ord="fro")
ret = frob_norm - (
lambda_
* torch.linalg.norm(self.association_discrepancy(P_list, S_list, x), dim=1, ord=1)
)
return ret.mean()
def min_loss(self, x):
P_list = self.P_layers
S_list = [S.detach() for S in self.S_layers]
# S_list = self.S_layers
lambda_ = -self.lambda_
return self.loss_function(self.output, P_list, S_list, lambda_, x)
def max_loss(self, x):
P_list = [P.detach() for P in self.P_layers]
# P_list = self.P_layers
S_list = self.S_layers
lambda_ = self.lambda_
return self.loss_function(self.output, P_list, S_list, lambda_, x)
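# Minimax sketch (as used by the training loop): min_loss() detaches S and negates
# lambda_, so minimizing it pulls the prior association P toward the series
# association S; max_loss() detaches P and keeps +lambda_, pushing S away from P.
# The Frobenius reconstruction term appears in both phases.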
def anomaly_score_whole(self, x):
# x:[length,dim]
x = np.array(split_N_pad(x.reshape([-1, 1]), self.N))
'''TODO: test data_slice'''
data = torch.from_numpy(x)
if torch.cuda.is_available():
data = data.cuda()
dataset = TensorDataset(data)
dataloader = DataLoader(dataset, batch_size=min(self.batch_size, len(dataset)), shuffle=False, drop_last=False)
scores = []
for step, batch in enumerate(dataloader):
batch = batch[0]
score = self.anomaly_score(batch)
scores.append(score)
return torch.cat(scores).flatten()
def anomaly_score(self, x):
# originally x: [N, in_channel]
output = self.forward(x)
tmp = -self.association_discrepancy(self.P_layers, self.S_layers, x)
ad = F.softmax(
tmp, dim=0
)
assert ad.shape[1] == self.N
# norm = torch.tensor(
# [
# torch.linalg.norm(x[i, :] - self.output[i, :], ord=2)
# for i in range(self.N)
# ]
# )
norm = []
for idx in range(x.shape[0]):
tmp = torch.tensor(
[
torch.linalg.norm(x[idx, i, :] - self.output[idx, i, :], ord=2)
for i in range(self.N)
]
)
norm.append(tmp)
norm = torch.cat(norm).reshape([-1, self.N])
assert norm.shape[1] == self.N
score = torch.mul(ad, norm)
return score
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/README.md
================================================
## README_Anomaly_Detection
### Usage
| ID | Method | Year | Press | Source Code |
| :--: | :----------------------------------------------------------: | :--: | :-------: | :----------------------------------------------------------: |
| 1 | [SPOT](https://dl.acm.org/doi/abs/10.1145/3097983.3098144) | 2017 | KDD | [github_link](https://github.com/Amossys-team/SPOT) |
| 2 | [DSPOT](https://dl.acm.org/doi/abs/10.1145/3097983.3098144) | 2017 | KDD | [github_link](https://github.com/Amossys-team/SPOT) |
| 3 | [LSTM-VAE](https://ieeexplore.ieee.org/abstract/document/8279425) | 2018 | IEEE RA.L | [github_link](https://github.com/SchindlerLiang/VAE-for-Anomaly-Detection) |
| 4 | [DONUT](https://dl.acm.org/doi/abs/10.1145/3178876.3185996) | 2018 | WWW | [github_link](https://github.com/NetManAIOps/donut) |
| 5 | [SR*](https://dl.acm.org/doi/abs/10.1145/3292500.3330680) | 2019 | KDD | - |
| 6 | [AT](https://arxiv.org/abs/2110.02642) | 2022 | ICLR | [github_link](https://github.com/spencerbraun/anomaly_transformer_pytorch) |
| 7 | [TS2Vec](https://www.aaai.org/AAAI22Papers/AAAI-8809.YueZ.pdf) | 2022 | AAAI | [github_link](https://github.com/yuezhihan/ts2vec) |
1. To train and evaluate SPOT/DSPOT on a dataset, set the dataset name in the script (`dataset = 'yahoo'` or `dataset = 'kpi'`), and then run one of the following commands:
```bash
python train_spot.py
python train_dspot.py
```
2. To train and evaluate LSTM-VAE on a dataset, run the following command:
```bash
python train_lstm_vae.py <dataset_name> <run_name> --loader <loader> --gpu <gpu_device_id> --seed 42 --eval
```
- `dataset_name`: The dataset name.
- `run_name`: The folder name used to save the model, outputs, and evaluation metrics. This can be set to any word.
- `loader`: The data loader used to load the experimental data.
- `gpu_device_id`: The GPU device's ID. This can be `0,1,2...`
3. To train and evaluate DONUT on a dataset, run the following command:
```bash
python train_donut.py <dataset_name> <run_name> --loader <loader> --gpu <gpu_device_id> --seed 42 --eval
```
4. The anomaly detection results of the SR are collected from the original [SR](https://dl.acm.org/doi/abs/10.1145/3292500.3330680) article.
5. To train and evaluate AT on a dataset, set the hyperparameters in the file `trainATbatch.py`, and then run the following command:
```bash
python trainATbatch.py
```
6. To train and evaluate TS2Vec on a dataset, run the following command:
```bash
python train_ts2vec.py <dataset_name> <run_name> --loader <loader> --repr-dims 320 --gpu <gpu_device_id> --seed 42 --eval
```
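For example (the dataset and run names here are illustrative, assuming the parser takes them as positional arguments as in the TS2Vec training script):
```bash
python train_ts2vec.py kpi ts2vec_run --loader anomaly --repr-dims 320 --gpu 0 --seed 42 --eval
```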
7. To train and evaluate TimesNet on a dataset, run the following command:
```bash
python train_timesnet.py ...
```
8. To train and evaluate GPT4TS on a dataset, run the following command:
```bash
python train_gpt4ts.py ...
```
9. To train and evaluate DCdetector on a dataset, run the following command:
```bash
python train_dcdetector.py ...
```
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/dataset_read_test.py
================================================
import datautils
import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score
def get_range_proba(predict, label, delay=7):
splits = np.where(label[1:] != label[:-1])[0] + 1
is_anomaly = label[0] == 1
new_predict = np.array(predict)
pos = 0
for sp in splits:
if is_anomaly:
if 1 in predict[pos:min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
is_anomaly = not is_anomaly
pos = sp
sp = len(label)
if is_anomaly: # anomaly in the end
if 1 in predict[pos: min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
return new_predict
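# Hedged worked example (toy arrays, not from the repo): with delay=1 the
# middle anomaly segment counts as fully detected, because a positive
# prediction falls within its first delay+1 points, while predictions inside
# normal segments are zeroed out:
#   get_range_proba([0, 0, 1, 0, 0], [0, 1, 1, 1, 0], delay=1)
#   -> array([0, 1, 1, 1, 0])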
# Reconstruct labels on a regular timestamp grid; missing timestamps are set to 0.
def reconstruct_label(timestamp, label):
timestamp = np.asarray(timestamp, np.int64)
index = np.argsort(timestamp)
timestamp_sorted = np.asarray(timestamp[index])
interval = np.min(np.diff(timestamp_sorted))
label = np.asarray(label, np.int64)
label = np.asarray(label[index])
idx = (timestamp_sorted - timestamp_sorted[0]) // interval
    new_label = np.zeros(shape=((timestamp_sorted[-1] - timestamp_sorted[0]) // interval + 1,), dtype=np.int64)
new_label[idx] = label
return new_label
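# Hedged worked example: timestamps [10, 30, 20] with labels [1, 0, 1] are
# sorted to [10, 20, 30] (interval 10) and yield new_label = [1, 1, 0]; any
# timestamps missing from the regular grid would remain 0.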
def eval_ad_result(test_pred_list, test_labels_list, test_timestamps_list, delay):
labels = []
pred = []
for test_pred, test_labels, test_timestamps in zip(test_pred_list, test_labels_list, test_timestamps_list):
assert test_pred.shape == test_labels.shape == test_timestamps.shape
test_labels = reconstruct_label(test_timestamps, test_labels)
test_pred = reconstruct_label(test_timestamps, test_pred)
test_pred = get_range_proba(test_pred, test_labels, delay)
labels.append(test_labels)
pred.append(test_pred)
labels = np.concatenate(labels)
pred = np.concatenate(pred)
return {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred)
}
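# Hedged note: eval_ad_result first re-aligns each series onto a regular
# timestamp grid (reconstruct_label) and applies the delay-tolerant segment
# adjustment (get_range_proba) before computing point-wise F1/precision/recall.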
dataset = 'kpi'  # options: 'yahoo', 'kpi'
print(f'Loading {dataset} data... ', end='')
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(dataset)
print("type = ", type(all_train_data), type(all_train_labels), type(all_train_timestamps), type(all_test_data))
print("delay = ", delay)
i = 1
for k in all_test_data:
print("i = ", i, ", k = ", k)
print("all_train_data.shape = ", all_train_data[k].shape)
print("all_train_labels.shape = ", all_train_labels[k].shape)
print("all_train_timestamps.shape = ", all_train_timestamps[k].shape)
print("all_test_data.shape = ", all_test_data[k].shape)
print("all_test_labels.shape = ", all_test_labels[k].shape)
print("all_test_timestamps.shape = ", all_test_timestamps[k].shape)
print("all_train_labels[k][:10] = ", all_train_labels[k][:10])
print("all_test_timestamps[k][:10] = ", all_test_timestamps[k][:10])
i = i + 1
break
# dataset = 'yahoo' # yahoo, kpi
# print('Loading yahoo data... ', end='')
# all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(dataset)
#
# print("type = ", type(all_train_data), type(all_train_labels), type(all_train_timestamps), type(all_test_data))
# print("delay = ", delay)
# i = 1
# for k in all_test_data:
# print("i = ", i, ", k = ", k)
# print("all_train_data.shape = ", all_train_data[k].shape)
# print("all_train_labels.shape = ", all_train_labels[k].shape)
# print("all_train_timestamps.shape = ", all_train_timestamps[k].shape)
# print("all_test_data.shape = ", all_test_data[k].shape)
# print("all_test_labels.shape = ", all_test_labels[k].shape)
# print("all_test_timestamps.shape = ", all_test_timestamps[k].shape)
# i = i + 1
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/datautils.py
================================================
import os
import numpy as np
import pandas as pd
import math
import random
from datetime import datetime
import pickle
from utils import pkl_load, pad_nan_to_target
from scipy.io.arff import loadarff
from sklearn.preprocessing import StandardScaler, MinMaxScaler
def load_UCR(dataset):
train_file = os.path.join('datasets/UCR', dataset, dataset + "_TRAIN.tsv")
test_file = os.path.join('datasets/UCR', dataset, dataset + "_TEST.tsv")
train_df = pd.read_csv(train_file, sep='\t', header=None)
test_df = pd.read_csv(test_file, sep='\t', header=None)
train_array = np.array(train_df)
test_array = np.array(test_df)
# Move the labels to {0, ..., L-1}
labels = np.unique(train_array[:, 0])
transform = {}
for i, l in enumerate(labels):
transform[l] = i
train = train_array[:, 1:].astype(np.float64)
train_labels = np.vectorize(transform.get)(train_array[:, 0])
test = test_array[:, 1:].astype(np.float64)
test_labels = np.vectorize(transform.get)(test_array[:, 0])
# Normalization for non-normalized datasets
# To keep the amplitude information, we do not normalize values over
# individual time series, but on the whole dataset
if dataset not in [
'AllGestureWiimoteX',
'AllGestureWiimoteY',
'AllGestureWiimoteZ',
'BME',
'Chinatown',
'Crop',
'EOGHorizontalSignal',
'EOGVerticalSignal',
'Fungi',
'GestureMidAirD1',
'GestureMidAirD2',
'GestureMidAirD3',
'GesturePebbleZ1',
'GesturePebbleZ2',
'GunPointAgeSpan',
'GunPointMaleVersusFemale',
'GunPointOldVersusYoung',
'HouseTwenty',
'InsectEPGRegularTrain',
'InsectEPGSmallTrain',
'MelbournePedestrian',
'PickupGestureWiimoteZ',
'PigAirwayPressure',
'PigArtPressure',
'PigCVP',
'PLAID',
'PowerCons',
'Rock',
'SemgHandGenderCh2',
'SemgHandMovementCh2',
'SemgHandSubjectCh2',
'ShakeGestureWiimoteZ',
'SmoothSubspace',
'UMD'
]:
return train[..., np.newaxis], train_labels, test[..., np.newaxis], test_labels
mean = np.nanmean(train)
std = np.nanstd(train)
train = (train - mean) / std
test = (test - mean) / std
return train[..., np.newaxis], train_labels, test[..., np.newaxis], test_labels
def load_anomaly(name):
res = pkl_load(f'datasets/{name}.pkl')
return res['all_train_data'], res['all_train_labels'], res['all_train_timestamps'], \
res['all_test_data'], res['all_test_labels'], res['all_test_timestamps'], \
res['delay']
def gen_ano_train_data(all_train_data):
''' Get the anomaly train data.
Args:
all_train_data(dict): all_train_data[k] (numpy.ndarray) with the shape (n_timestamps).
Returns:
pretrain_data (numpy.ndarray): padding with 'nan', the shape is (n_instance, n_timestamps, n_features).
'''
maxl = np.max([ len(all_train_data[k]) for k in all_train_data ])
pretrain_data = []
for k in all_train_data:
train_data = pad_nan_to_target(all_train_data[k], maxl, axis=0)
pretrain_data.append(train_data)
pretrain_data = np.expand_dims(np.stack(pretrain_data), 2)
return pretrain_data
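# ------------------------------------------------------------------
# Hedged usage sketch (not part of the original file); the dataset names are
# placeholders and assume the corresponding files exist under datasets/:
#
#   train, train_labels, test, test_labels = load_UCR('Coffee')
#   # train: (n_train, series_length, 1), z-normalized with train-set stats
#
#   all_train_data = load_anomaly('kpi')[0]  # dicts keyed by series id
#   pretrain = gen_ano_train_data(all_train_data)
#   # pretrain: (n_series, max_length, 1), NaN-padded at the tail
# ------------------------------------------------------------------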
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/dcdetector_solver.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import os
import time
# from utils.utils import *
from other_anomaly_baselines.models.DCdetector import DCdetector
from other_anomaly_baselines.datasets.data_loader import get_loader_segment
from einops import rearrange
from other_anomaly_baselines.metrics.metrics import *
import warnings
from tadpak import evaluate
from torch.utils.data import TensorDataset, DataLoader
warnings.filterwarnings('ignore')
class UniLoader_train(object):
def __init__(self, data_set, win_size, step, mode="train"):
self.mode = mode
self.step = step
self.win_size = win_size
self.train = data_set
def __len__(self):
"""
Number of images in the object dataset.
"""
return (self.train.shape[0] - self.win_size) // self.step + 1
def __getitem__(self, index):
index = index * self.step
return np.float32(self.train[index:index + self.win_size])
class UniLoader_test(object):
def __init__(self, data_set, label_set, win_size, step, mode="train"):
self.mode = mode
self.step = step
self.win_size = win_size
self.train = data_set
self.train_labels = label_set
def __len__(self):
"""
Number of images in the object dataset.
"""
return (self.train.shape[0] - self.win_size) // self.step + 1
def __getitem__(self, index):
index = index * self.step
        # NOTE: labels are always taken from the first window [0:win_size].
        return np.float32(self.train[index:index + self.win_size]), np.float32(self.train_labels[0:self.win_size])
def my_kl_loss(p, q):
res = p * (torch.log(p + 0.0001) - torch.log(q + 0.0001))
return torch.mean(torch.sum(res, dim=-1), dim=1)
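# Hedged note: my_kl_loss is an epsilon-smoothed, asymmetric KL divergence,
#   KL(p || q) = sum_k p_k * (log(p_k + 1e-4) - log(q_k + 1e-4)),
# summed over the last axis and averaged over axis 1 (the attention-head axis
# for (batch, heads, win, win) maps), giving one score per window position.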
def adjust_learning_rate(optimizer, epoch, lr_):
lr_adjust = {epoch: lr_ * (0.5 ** ((epoch - 1) // 1))}
if epoch in lr_adjust.keys():
lr = lr_adjust[epoch]
for param_group in optimizer.param_groups:
param_group['lr'] = lr
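# Hedged note: this schedule halves the learning rate every epoch,
# lr_epoch = lr_ * 0.5 ** (epoch - 1), i.e. lr, lr/2, lr/4, ...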
class EarlyStopping:
def __init__(self, patience=7, verbose=False, dataset_name='', delta=0, index=0):
self.patience = patience
self.verbose = verbose
self.counter = 0
self.best_score = None
self.best_score2 = None
self.early_stop = False
        self.val_loss_min = np.inf
        self.val_loss2_min = np.inf
self.delta = delta
self.dataset = dataset_name
self.index = index
def __call__(self, val_loss, val_loss2, model, path):
score = -val_loss
score2 = -val_loss2
if self.best_score is None:
self.best_score = score
self.best_score2 = score2
self.save_checkpoint(val_loss, val_loss2, model, path)
elif score < self.best_score + self.delta or score2 < self.best_score2 + self.delta:
self.counter += 1
if self.counter >= self.patience:
self.early_stop = True
else:
self.best_score = score
self.best_score2 = score2
self.save_checkpoint(val_loss, val_loss2, model, path)
self.counter = 0
def save_checkpoint(self, val_loss, val_loss2, model, path):
print("os.path.join(path, str(self.dataset) + '_checkpoint.pth') = ", os.path.join(path, str(self.dataset) + '_checkpoint.pth'))
torch.save(model.state_dict(), os.path.join(path, str(self.dataset) + str(self.index) +'_checkpoint.pth'))
self.val_loss_min = val_loss
self.val_loss2_min = val_loss2
class Solver(object):
DEFAULTS = {}
def __init__(self, config, multi=True):
self.__dict__.update(Solver.DEFAULTS, **config)
if multi:
self.train_loader, _ = get_loader_segment(self.index, self.data_path + self.dataset, batch_size=self.batch_size,
win_size=self.win_size, mode='train', dataset=self.dataset, )
self.vali_loader, _ = get_loader_segment(self.index, self.data_path + self.dataset, batch_size=self.batch_size,
win_size=self.win_size, mode='val', dataset=self.dataset)
self.test_loader, _ = get_loader_segment(self.index, self.data_path + self.dataset, batch_size=self.batch_size,
win_size=self.win_size, mode='test', dataset=self.dataset)
self.thre_loader, _ = get_loader_segment(self.index, self.data_path + self.dataset, batch_size=self.batch_size,
win_size=self.win_size, mode='thre', dataset=self.dataset)
else:
self.train_loader, _ = None, None
self.vali_loader, _ = None, None
self.test_loader, _ = None, None
self.thre_loader, _ = None, None
self.build_model()
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if self.loss_fuc == 'MAE':
self.criterion = nn.L1Loss()
elif self.loss_fuc == 'MSE':
self.criterion = nn.MSELoss()
def build_model(self):
self.model = DCdetector(win_size=self.win_size, enc_in=self.input_c, c_out=self.output_c, n_heads=self.n_heads,
d_model=self.d_model, e_layers=self.e_layers, patch_size=self.patch_size,
channel=self.input_c)
if torch.cuda.is_available():
self.model.cuda()
self.optimizer = torch.optim.Adam(self.model.parameters(), lr=self.lr)
def vali(self, vali_loader):
self.model.eval()
loss_1 = []
loss_2 = []
for i, (input_data, _) in enumerate(vali_loader):
input = input_data.float().to(self.device)
series, prior = self.model(input)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
series_loss += (torch.mean(my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach())) + torch.mean(
my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach(),
series[u])))
prior_loss += (torch.mean(
my_kl_loss((prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach())) + torch.mean(
my_kl_loss(series[u].detach(),
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)))))
series_loss = series_loss / len(prior)
prior_loss = prior_loss / len(prior)
loss_1.append((prior_loss - series_loss).item())
return np.average(loss_1), np.average(loss_2)
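# Hedged note: loss_2 is never populated above, so the second returned value
# is the mean of an empty list (NaN); in practice only the first value drives
# the early-stopping check.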
def train(self):
time_now = time.time()
path = self.model_save_path
if not os.path.exists(path):
os.makedirs(path)
early_stopping = EarlyStopping(patience=5, verbose=True, dataset_name=self.dataset, index=self.index)
train_steps = len(self.train_loader)
for epoch in range(self.num_epochs):
iter_count = 0
epoch_time = time.time()
self.model.train()
# for i, data in enumerate(self.train_loader):
# print(data)
# break
for i, (input_data, labels) in enumerate(self.train_loader):
self.optimizer.zero_grad()
iter_count += 1
input = input_data.float().to(self.device)
# print("input = ", type(input), input.shape)
series, prior = self.model(input)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
series_loss += (torch.mean(my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach())) + torch.mean(
my_kl_loss((prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach(),
series[u])))
prior_loss += (torch.mean(my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach())) + torch.mean(
my_kl_loss(series[u].detach(), (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)))))
series_loss = series_loss / len(prior)
prior_loss = prior_loss / len(prior)
loss = prior_loss - series_loss
if (i + 1) % 100 == 0:
speed = (time.time() - time_now) / iter_count
left_time = speed * ((self.num_epochs - epoch) * train_steps - i)
print('\tspeed: {:.4f}s/iter; left time: {:.4f}s'.format(speed, left_time))
iter_count = 0
time_now = time.time()
loss.backward()
self.optimizer.step()
vali_loss1, vali_loss2 = self.vali(self.vali_loader)
print(
"Epoch: {0}, Cost time: {1:.3f}s ".format(
epoch + 1, time.time() - epoch_time))
early_stopping(vali_loss1, vali_loss2, self.model, path)
if early_stopping.early_stop:
break
adjust_learning_rate(self.optimizer, epoch + 1, self.lr)
def test(self, ucr_index=None):
self.model.load_state_dict(
torch.load(
os.path.join(str(self.model_save_path), str(self.dataset) + str(self.index) + '_checkpoint.pth')))
self.model.eval()
temperature = 50
        # (1) statistics on the train set
attens_energy = []
for i, (input_data, labels) in enumerate(self.train_loader):
input = input_data.float().to(self.device)
series, prior = self.model(input)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
if u == 0:
series_loss = my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss = my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
else:
series_loss += my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss += my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
metric = torch.softmax((-series_loss - prior_loss), dim=-1)
cri = metric.detach().cpu().numpy()
attens_energy.append(cri)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
train_energy = np.array(attens_energy)
# (2) find the threshold
attens_energy = []
for i, (input_data, labels) in enumerate(self.thre_loader):
input = input_data.float().to(self.device)
series, prior = self.model(input)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
if u == 0:
series_loss = my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss = my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
else:
series_loss += my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss += my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
metric = torch.softmax((-series_loss - prior_loss), dim=-1)
cri = metric.detach().cpu().numpy()
attens_energy.append(cri)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
test_energy = np.array(attens_energy)
combined_energy = np.concatenate([train_energy, test_energy], axis=0)
thresh = np.percentile(combined_energy, 100 - self.anormly_ratio)
print("Threshold :", thresh)
# (3) evaluation on the test set
test_labels = []
attens_energy = []
for i, (input_data, labels) in enumerate(self.thre_loader):
input = input_data.float().to(self.device)
series, prior = self.model(input)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
if u == 0:
series_loss = my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss = my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
else:
series_loss += my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss += my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
metric = torch.softmax((-series_loss - prior_loss), dim=-1)
cri = metric.detach().cpu().numpy()
attens_energy.append(cri)
test_labels.append(labels)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
test_labels = np.concatenate(test_labels, axis=0).reshape(-1)
test_energy = np.array(attens_energy)
test_labels = np.array(test_labels)
pred = (test_energy > thresh).astype(int)
gt = test_labels.astype(int)
# labels = np.asarray(labels_log, np.int64)[0]
# print("test_energy.shape = ", test_energy.shape, test_labels.shape)
# print("test_energy.shape = ", test_energy[:10])
# print("test_labels.shape = ", test_labels[:10])
index_list = [38, 54, 71, 72, 79, 85, 88, 108, 146, 162, 179, 180, 187, 193, 196, 203, 212, 229, 232]
if ucr_index in index_list:
eval_res = {
'f1': None,
'precision': None,
'recall': None,
"Affiliation precision": None,
"Affiliation recall": None,
"R_AUC_ROC": None,
"R_AUC_PR": None,
"VUS_ROC": None,
"VUS_PR": None,
'f1_pa_10': None,
'f1_pa_50': None,
'f1_pa_90': None,
}
else:
results_f1_pa_k_10 = evaluate.evaluate(test_energy, test_labels, k=10)
results_f1_pa_k_50 = evaluate.evaluate(test_energy, test_labels, k=50)
results_f1_pa_k_90 = evaluate.evaluate(test_energy, test_labels, k=90)
eval_res = {
'f1': None,
'precision': None,
'recall': None,
"Affiliation precision": None,
"Affiliation recall": None,
"R_AUC_ROC": None,
"R_AUC_PR": None,
"VUS_ROC": None,
"VUS_PR": None,
'f1_pa_10': results_f1_pa_k_10['best_f1_w_pa'],
'f1_pa_50': results_f1_pa_k_50['best_f1_w_pa'],
'f1_pa_90': results_f1_pa_k_90['best_f1_w_pa'],
}
matrix = [self.index]
scores_simple = combine_all_evaluation_scores(pred, gt, test_energy)
for key, value in scores_simple.items():
matrix.append(value)
if key == 'Affiliation precision':
eval_res["Affiliation precision"] = value
if key == 'Affiliation recall':
eval_res["Affiliation recall"] = value
if key == 'R_AUC_ROC':
eval_res["R_AUC_ROC"] = value
if key == 'R_AUC_PR':
eval_res["R_AUC_PR"] = value
if key == 'VUS_ROC':
eval_res["VUS_ROC"] = value
if key == 'VUS_PR':
eval_res["VUS_PR"] = value
print('{0:21} : {1:0.4f}'.format(key, value))
anomaly_state = False
for i in range(len(gt)):
if gt[i] == 1 and pred[i] == 1 and not anomaly_state:
anomaly_state = True
for j in range(i, 0, -1):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
for j in range(i, len(gt)):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
elif gt[i] == 0:
anomaly_state = False
if anomaly_state:
pred[i] = 1
pred = np.array(pred)
gt = np.array(gt)
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(gt, pred)
precision, recall, f_score, support = precision_recall_fscore_support(gt, pred, average='binary')
print(
"Accuracy : {:0.4f}, Precision : {:0.4f}, Recall : {:0.4f}, F-score : {:0.4f} ".format(accuracy, precision,
recall, f_score))
# if self.data_path == 'UCR' or 'UCR_AUG':
# import csv
# with open('result_dc/' + self.dataset + '.csv', 'a+') as f:
# writer = csv.writer(f)
# writer.writerow(matrix)
eval_res['f1'] = f_score
eval_res['precision'] = precision
eval_res['recall'] = recall
return eval_res
def vali_uni(self, vali_loader):
self.model.eval()
loss_1 = []
loss_2 = []
for i, input_data in enumerate(vali_loader):
input = input_data.float().to(self.device)
series, prior = self.model(input)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
series_loss += (torch.mean(my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach())) + torch.mean(
my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach(),
series[u])))
prior_loss += (torch.mean(
my_kl_loss((prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach())) + torch.mean(
my_kl_loss(series[u].detach(),
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)))))
series_loss = series_loss / len(prior)
prior_loss = prior_loss / len(prior)
loss_1.append((prior_loss - series_loss).item())
return np.average(loss_1), np.average(loss_2)
def train_uni(self):
time_now = time.time()
path = self.model_save_path
if not os.path.exists(path):
os.makedirs(path)
early_stopping = EarlyStopping(patience=5, verbose=True, dataset_name=self.dataset, index=self.index)
train_steps = len(self.train_loader)
for epoch in range(self.num_epochs):
iter_count = 0
epoch_time = time.time()
self.model.train()
# for i, data in enumerate(self.train_loader):
# print(data)
# break
for i, input_data in enumerate(self.train_loader):
self.optimizer.zero_grad()
iter_count += 1
input = input_data.float().to(self.device)
# print("input = ", type(input), input.shape)
series, prior = self.model(input)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
series_loss += (torch.mean(my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach())) + torch.mean(
my_kl_loss((prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach(),
series[u])))
prior_loss += (torch.mean(my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach())) + torch.mean(
my_kl_loss(series[u].detach(), (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)))))
series_loss = series_loss / len(prior)
prior_loss = prior_loss / len(prior)
loss = prior_loss - series_loss
if (i + 1) % 100 == 0:
speed = (time.time() - time_now) / iter_count
left_time = speed * ((self.num_epochs - epoch) * train_steps - i)
print('\tspeed: {:.4f}s/iter; left time: {:.4f}s'.format(speed, left_time))
iter_count = 0
time_now = time.time()
loss.backward()
self.optimizer.step()
vali_loss1, vali_loss2 = self.vali_uni(self.vali_loader)
print(
"Epoch: {0}, Cost time: {1:.3f}s ".format(
epoch + 1, time.time() - epoch_time))
early_stopping(vali_loss1, vali_loss2, self.model, path)
if early_stopping.early_stop:
break
adjust_learning_rate(self.optimizer, epoch + 1, self.lr)
def test_uni(self, all_train_data, all_test_data, all_test_labels, all_test_timestamps, delay, config):
self.model.load_state_dict(
torch.load(
os.path.join(str(self.model_save_path), str(self.dataset) + str(self.index) + '_checkpoint.pth')))
self.model.eval()
temperature = 50
        # (1) statistics on the train set
attens_energy = []
for k in all_train_data:
train_data = all_train_data[k]
train_data = np.array(train_data)
# train_data =
train_data = np.expand_dims(train_data, axis=-1)
train_dataset = UniLoader_train(train_data, config.win_size, 1)
train_loader = DataLoader(dataset=train_dataset,
batch_size=config.batch_size,
shuffle=True,
num_workers=2,
drop_last=True)
for i, input_data in enumerate(train_loader):
input = input_data.float().to(self.device)
series, prior = self.model(input)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
if u == 0:
series_loss = my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss = my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
else:
series_loss += my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss += my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
metric = torch.softmax((-series_loss - prior_loss), dim=-1)
cri = metric.detach().cpu().numpy()
attens_energy.append(cri)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
train_energy = np.array(attens_energy)
# (2) find the threshold
attens_energy = []
for k in all_train_data:
_test_labels = all_test_labels[k]
test_data = all_test_data[k]
test_data = np.array(test_data)
test_data = np.expand_dims(test_data, axis=-1)
test_dataset = UniLoader_test(test_data, _test_labels, config.win_size, 1)
test_loader = DataLoader(dataset=test_dataset,
batch_size=config.batch_size,
shuffle=True,
num_workers=2,
drop_last=True)
for i, (input_data, labels) in enumerate(test_loader):
input = input_data.float().to(self.device)
series, prior = self.model(input)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
if u == 0:
series_loss = my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss = my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
else:
series_loss += my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss += my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
metric = torch.softmax((-series_loss - prior_loss), dim=-1)
cri = metric.detach().cpu().numpy()
attens_energy.append(cri)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
test_energy = np.array(attens_energy)
combined_energy = np.concatenate([train_energy, test_energy], axis=0)
thresh = np.percentile(combined_energy, 100 - self.anormly_ratio)
print("Threshold :", thresh)
# (3) evaluation on the test set
test_labels = []
attens_energy = []
for k in all_train_data:
_test_labels = all_test_labels[k]
test_data = all_test_data[k]
test_data = np.array(test_data)
test_data = np.expand_dims(test_data, axis=-1)
test_dataset = UniLoader_test(test_data, _test_labels, config.win_size, 1)
test_loader = DataLoader(dataset=test_dataset,
batch_size=config.batch_size,
shuffle=True,
num_workers=2,
drop_last=True)
for i, (input_data, labels) in enumerate(test_loader):
input = input_data.float().to(self.device)
series, prior = self.model(input)
series_loss = 0.0
prior_loss = 0.0
for u in range(len(prior)):
if u == 0:
series_loss = my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss = my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
else:
series_loss += my_kl_loss(series[u], (
prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)).detach()) * temperature
prior_loss += my_kl_loss(
(prior[u] / torch.unsqueeze(torch.sum(prior[u], dim=-1), dim=-1).repeat(1, 1, 1,
self.win_size)),
series[u].detach()) * temperature
metric = torch.softmax((-series_loss - prior_loss), dim=-1)
cri = metric.detach().cpu().numpy()
attens_energy.append(cri)
test_labels.append(labels)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
test_labels = np.concatenate(test_labels, axis=0).reshape(-1)
test_energy = np.array(attens_energy)
test_labels = np.array(test_labels)
pred = (test_energy > thresh).astype(int)
gt = test_labels.astype(int)
# labels = np.asarray(labels_log, np.int64)[0]
# print("test_energy.shape = ", test_energy.shape, test_labels.shape)
# print("test_energy.shape = ", test_energy[:10])
# print("test_labels.shape = ", test_labels[:10])
# results_f1_pa_k_10 = evaluate.evaluate(test_energy, test_labels, k=10)
# results_f1_pa_k_50 = evaluate.evaluate(test_energy, test_labels, k=50)
# results_f1_pa_k_90 = evaluate.evaluate(test_energy, test_labels, k=90)
eval_res = {
'f1': None,
'precision': None,
'recall': None,
"Affiliation precision": None,
"Affiliation recall": None,
"R_AUC_ROC": None,
"R_AUC_PR": None,
"VUS_ROC": None,
"VUS_PR": None,
# 'f1_pa_10': results_f1_pa_k_10['best_f1_w_pa'],
# 'f1_pa_50': results_f1_pa_k_50['best_f1_w_pa'],
# 'f1_pa_90': results_f1_pa_k_90['best_f1_w_pa'],
}
# matrix = [self.index]
min_len = min(min(pred.shape[0], gt.shape[0]), test_energy.shape[0])
scores_simple = combine_all_evaluation_scores(pred[:min_len], gt[:min_len], test_energy[:min_len])
for key, value in scores_simple.items():
# matrix.append(value)
if key == 'Affiliation precision':
eval_res["Affiliation precision"] = value
if key == 'Affiliation recall':
eval_res["Affiliation recall"] = value
if key == 'R_AUC_ROC':
eval_res["R_AUC_ROC"] = value
if key == 'R_AUC_PR':
eval_res["R_AUC_PR"] = value
if key == 'VUS_ROC':
eval_res["VUS_ROC"] = value
if key == 'VUS_PR':
eval_res["VUS_PR"] = value
print('{0:21} : {1:0.4f}'.format(key, value))
anomaly_state = False
for i in range(len(gt)):
if gt[i] == 1 and pred[i] == 1 and not anomaly_state:
anomaly_state = True
for j in range(i, 0, -1):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
for j in range(i, len(gt)):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
elif gt[i] == 0:
anomaly_state = False
if anomaly_state:
pred[i] = 1
pred = np.array(pred)
gt = np.array(gt)
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(gt, pred)
precision, recall, f_score, support = precision_recall_fscore_support(gt[:min_len], pred[:min_len], average='binary')
print(
"Accuracy : {:0.4f}, Precision : {:0.4f}, Recall : {:0.4f}, F-score : {:0.4f} ".format(accuracy, precision,
recall, f_score))
# if self.data_path == 'UCR' or 'UCR_AUG':
# import csv
# with open('result_dc/' + self.dataset + '.csv', 'a+') as f:
# writer = csv.writer(f)
# writer.writerow(matrix)
eval_res['f1'] = f_score
eval_res['precision'] = precision
eval_res['recall'] = recall
return eval_res
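# ------------------------------------------------------------------
# Hedged usage sketch (not in the original file). The config keys below are
# inferred from the attributes this Solver reads; values are placeholders:
#
#   config = dict(index=0, data_path='./datasets/', dataset='SMD',
#                 batch_size=128, win_size=100, loss_fuc='MSE',
#                 input_c=38, output_c=38, n_heads=1, d_model=256,
#                 e_layers=3, patch_size=[5], lr=1e-4, num_epochs=3,
#                 model_save_path='./checkpoints/', anormly_ratio=1.0)
#   solver = Solver(config, multi=True)
#   solver.train()
#   eval_res = solver.test()
# ------------------------------------------------------------------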
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/donut.py
================================================
import torch
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
from models.donut_model import DONUT_Model
from utils import split_with_nan, centerize_vary_length_series
import math
import time
from tasks.anomaly_detection import eval_ad_result, np_shift
import bottleneck as bn
from sklearn.metrics import f1_score, precision_score, recall_score
from other_anomaly_baselines.metrics.affiliation.metrics import pr_from_events
from other_anomaly_baselines.metrics.vus.metrics import get_range_vus_roc
from other_anomaly_baselines.metrics.affiliation.generics import convert_vector_to_events
from tadpak import evaluate
def adjustment(gt, pred):
anomaly_state = False
for i in range(len(gt)):
if gt[i] == 1 and pred[i] == 1 and not anomaly_state:
anomaly_state = True
for j in range(i, 0, -1):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
for j in range(i, len(gt)):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
elif gt[i] == 0:
anomaly_state = False
if anomaly_state:
pred[i] = 1
return gt, pred
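# Hedged note: adjustment() implements the widely used point-adjustment (PA)
# protocol: once any point inside a ground-truth anomaly segment is predicted
# positive, every point of that segment is marked positive, expanding both
# backwards and forwards from the first hit.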
class DONUT:
def __init__(
self,
input_dims,
latent_dim=100,
hidden_dim=3,
device='cuda',
lr=0.001,
batch_size=8,
z_kld_weight=0.1,
x_kld_weight=0.1,
max_train_length=None,
after_iter_callback=None,
after_epoch_callback=None
):
super().__init__()
self.device = device
self.lr = lr
self.batch_size = batch_size
self.z_kld_weight = z_kld_weight
self.x_kld_weight = x_kld_weight
self.max_train_length = max_train_length
self.input_dims = input_dims
self.net = DONUT_Model(in_channel=input_dims, latent_dim=latent_dim, hidden_dim=hidden_dim).to(self.device)
self.after_iter_callback = after_iter_callback
self.after_epoch_callback = after_epoch_callback
self.n_epochs = 0
self.n_iters = 0
def train(self, train_data, n_epochs=None, n_iters=None, verbose=False):
'''
Args:
train_data (numpy.ndarray): The training data. It should have a shape of (n_instance, n_timestamps, n_features). All missing data should be set to NaN.
            n_epochs (Union[int, NoneType]): The number of epochs. Training stops when this limit is reached.
            n_iters (Union[int, NoneType]): The number of iterations. Training stops when this limit is reached. If neither n_epochs nor n_iters is specified, n_iters defaults to 200 for datasets with size <= 100000 and 600 otherwise.
verbose (bool): Whether to print the training loss after each epoch.
Returns:
loss_log: a list containing the training losses on each epoch.
'''
assert train_data.ndim == 3
if n_iters is None and n_epochs is None:
n_iters = 200 if train_data.size <= 100000 else 600 # default param for n_iters
if self.max_train_length is not None:
sections = train_data.shape[1] // self.max_train_length
if sections >= 2:
train_data = np.concatenate(split_with_nan(train_data, sections, axis=1), axis=0)
# train_data: (n_instance*sections, max_train_length, n_features)
temporal_missing = np.isnan(train_data).all(axis=-1).any(axis=0) # (max_train_length)
if temporal_missing[0] or temporal_missing[-1]: # whether the head or tail exists nan
train_data = centerize_vary_length_series(train_data)
train_data = train_data[~np.isnan(train_data).all(axis=2).all(axis=1)]
# delete the sequence (max_train_length, n_features) contains only nan
for i in range(train_data.shape[0]):
train_data[i][np.isnan(train_data[i])] = np.nanmean(train_data[i])
train_dataset = TensorDataset(torch.from_numpy(train_data).to(torch.float))
train_loader = DataLoader(train_dataset, batch_size=min(self.batch_size, len(train_dataset)), shuffle=True, drop_last=True)
optimizer = torch.optim.AdamW(self.net.parameters(), lr=self.lr)
loss_log = []
while True:
if n_epochs is not None and self.n_epochs >= n_epochs:
break
cum_loss = 0
n_epoch_iters = 0
interrupted = False
for batch in train_loader:
if n_iters is not None and self.n_iters >= n_iters:
interrupted = True
break
x = batch[0] #(batch_size, n_timestamps, n_features)
# print("#####################")
# raise Exception('my personal exception!')
if self.max_train_length is not None and x.size(1) > self.max_train_length:
window_offset = np.random.randint(x.size(1) - self.max_train_length + 1)
x = x[:, window_offset : window_offset + self.max_train_length]
x = x.to(self.device)
optimizer.zero_grad()
outputs, z_mu, z_log_var, x_mu, x_log_var = self.net(x)
loss = self.net.loss_function(x, outputs, z_mu, z_log_var, x_mu, x_log_var, self.z_kld_weight, self.x_kld_weight)
loss.backward()
optimizer.step()
cum_loss += loss.item()
n_epoch_iters += 1
self.n_iters += 1
if self.after_iter_callback is not None:
self.after_iter_callback(self, loss.item())
if interrupted:
break
cum_loss /= n_epoch_iters
loss_log.append(cum_loss)
if verbose:
print(f"Epoch #{self.n_epochs}: loss={cum_loss}")
self.n_epochs += 1
if self.after_epoch_callback is not None:
self.after_epoch_callback(self, cum_loss)
return loss_log
def anomaly_score(self, model, test_data, is_multi=False):
if is_multi:
test_data = torch.from_numpy(np.float32(test_data.reshape(1, -1, self.input_dims))).to(self.device)
else:
test_data = torch.from_numpy(np.float32(test_data.reshape(1, -1, 1))).to(self.device)
# test_data = torch.from_numpy(np.float32(test_data.reshape(1, -1, 1))).to(self.device)
if self.max_train_length is not None and test_data.size(1) > self.max_train_length:
window_offset = np.random.randint(test_data.size(1) - self.max_train_length + 1)
test_data = test_data[:, window_offset: window_offset + self.max_train_length]
        # Set the batch size
        batch_size = 2
        # Build the DataLoader
        test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=False)
        self.net.eval()
        with torch.no_grad():
            # Collect the outputs of each batch
            outputs_list = []
            # z_mu_list = []
            # z_log_var_list = []
            # x_mu_list = []
            # x_log_var_list = []
            for input_data in test_loader:
                input_data = input_data[0]  # extract the data from the batch
                # x = x.to(self.device)
                print("input_data.shape = ", input_data.shape)
                batch_outputs, batch_z_mu, batch_z_log_var, batch_x_mu, batch_x_log_var = self.net(input_data)
                # Store this batch's outputs
                outputs_list.append(batch_outputs)
                # z_mu_list.append(batch_z_mu)
                # z_log_var_list.append(batch_z_log_var)
                # x_mu_list.append(batch_x_mu)
                # x_log_var_list.append(batch_x_log_var)
            # Concatenate the results of all batches
outputs = torch.cat(outputs_list, dim=0)
# z_mu = torch.cat(z_mu_list, dim=0)
# z_log_var = torch.cat(z_log_var_list, dim=0)
# x_mu = torch.cat(x_mu_list, dim=0)
# x_log_var = torch.cat(x_log_var_list, dim=0)
# print("test_data.shape = ", test_data.shape)
# print("self.net = ", self.net)
# outputs, z_mu, z_log_var, x_mu, x_log_var = self.net(test_data)
# rec_error = torch.sum(torch.abs(outputs - test_data), dim=-1)
rec_error = torch.sum(torch.square(outputs - test_data), dim=-1)
rec_error = torch.flatten(rec_error)
return rec_error
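# Hedged note: the anomaly score above is the per-timestamp squared
# reconstruction error, summed over the feature dimension and flattened into
# a 1-D score series aligned with the (possibly cropped) input window.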
def evaluate(self, model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay, is_multi=False, ucr_index=None):
t = time.time()
res_log = []
labels_log = []
timestamps_log = []
        res_log_scores = []
if is_multi:
train_data = all_train_data
test_data = all_test_data
test_labels = all_test_labels
print("train_data.shape = ", train_data.shape, ", test_data.shape = ", test_data.shape)
train_err = self.anomaly_score(model, train_data, is_multi=is_multi).detach().cpu().numpy()
test_err = self.anomaly_score(model, test_data, is_multi=is_multi).detach().cpu().numpy()
ma = np_shift(bn.move_mean(np.concatenate([train_err, test_err]), 21), 1)
train_err_adj = (train_err - ma[:len(train_err)]) / ma[:len(train_err)]
test_err_adj = (test_err - ma[len(train_err):]) / ma[len(train_err):]
train_err_adj = train_err_adj[22:]
thr = np.mean(train_err_adj) + 4 * np.std(train_err_adj)
test_res = (test_err_adj > thr) * 1
            res_log_scores.append(test_err_adj)
for i in range(len(test_res)):
if i >= delay and test_res[i - delay:i].sum() >= 1:
test_res[i] = 0
res_log.append(test_res)
labels_log.append(test_labels)
else:
for k in all_test_data:
train_data = all_train_data[k]
train_labels = all_train_labels[k]
train_timestamps = all_train_timestamps[k]
test_data = all_test_data[k]
test_labels = all_test_labels[k]
test_timestamps = all_test_timestamps[k]
train_err = self.anomaly_score(model, train_data).detach().cpu().numpy()
test_err = self.anomaly_score(model, test_data).detach().cpu().numpy()
ma = np_shift(bn.move_mean(np.concatenate([train_err, test_err]), 21), 1)
train_err_adj = (train_err - ma[:len(train_err)]) / ma[:len(train_err)]
test_err_adj = (test_err - ma[len(train_err):]) / ma[len(train_err):]
train_err_adj = train_err_adj[22:]
thr = np.mean(train_err_adj) + 4 * np.std(train_err_adj)
test_res = (test_err_adj > thr) * 1
                res_log_scores.append(test_err_adj)
for i in range(len(test_res)):
if i >= delay and test_res[i-delay:i].sum() >= 1:
test_res[i] = 0
res_log.append(test_res)
labels_log.append(test_labels)
timestamps_log.append(test_timestamps)
t = time.time() - t
if is_multi:
            if ucr_index in (79, 108, 187, 203):
labels = np.asarray(labels_log, np.int64)[0]
pred = np.asarray(res_log, np.int64)[0]
labels, pred = adjustment(labels, pred)
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": None,
"Affiliation recall": None,
"R_AUC_ROC": None,
"R_AUC_PR": None,
"VUS_ROC": None,
"VUS_PR": None,
'f1_pa_10': None,
'f1_pa_50': None,
'f1_pa_90': None,
}
else:
labels = np.asarray(labels_log, np.int64)[0]
pred = np.asarray(res_log, np.int64)[0]
# print("labels.shape = ", labels.shape, labels[:5])
# print("pred.shape = ", pred.shape, pred[:5])
events_pred = convert_vector_to_events(pred)
events_gt = convert_vector_to_events(labels)
Trange = (0, len(labels))
affiliation = pr_from_events(events_pred, events_gt, Trange)
vus_results = get_range_vus_roc(labels, pred, 100) # default slidingWindow = 100
                pred_scores = np.asarray(res_log_scores, np.float64)[0]
results_f1_pa_k_10 = evaluate.evaluate(pred_scores, labels, k=10)
results_f1_pa_k_50 = evaluate.evaluate(pred_scores, labels, k=50)
results_f1_pa_k_90 = evaluate.evaluate(pred_scores, labels, k=90)
labels, pred = adjustment(labels, pred)
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"],
'f1_pa_10': results_f1_pa_k_10['best_f1_w_pa'],
'f1_pa_50': results_f1_pa_k_50['best_f1_w_pa'],
'f1_pa_90': results_f1_pa_k_90['best_f1_w_pa'],
}
else:
            eval_res = eval_ad_result(res_log, labels_log, timestamps_log, delay, pred_scores=res_log_scores)
eval_res['infer_time'] = t
return res_log, eval_res
def save(self, fn):
''' Save the model to a file.
Args:
fn (str): filename.
'''
torch.save(self.net.state_dict(), fn)
def load(self, fn):
''' Load the model from a file.
Args:
fn (str): filename.
'''
state_dict = torch.load(fn, map_location=self.device)
self.net.load_state_dict(state_dict)
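# ------------------------------------------------------------------
# Hedged usage sketch (not in the original file); shapes and values are
# placeholders:
#
#   import numpy as np
#   model = DONUT(input_dims=1, device='cuda', batch_size=8)
#   train_data = np.random.randn(4, 3000, 1)  # (n_instance, T, n_features)
#   loss_log = model.train(train_data, n_epochs=5, verbose=True)
#   model.save('donut.pth')
# ------------------------------------------------------------------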
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/exp_anomaly_detection.py
================================================
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import accuracy_score
import torch.multiprocessing
from other_anomaly_baselines.models import TimesNet
from other_anomaly_baselines.models import GPT4TS
torch.multiprocessing.set_sharing_strategy('file_system')
import torch
import torch.nn as nn
from torch import optim
import os
import time
import warnings
import numpy as np
import math
from other_anomaly_baselines.metrics.metrics import *
import warnings
from tadpak import evaluate
from torch.utils.data import TensorDataset, DataLoader
warnings.filterwarnings('ignore')
class UniLoader_train(object):
def __init__(self, data_set, win_size, step, mode="train"):
self.mode = mode
self.step = step
self.win_size = win_size
self.train = data_set
def __len__(self):
"""
Number of images in the object dataset.
"""
return (self.train.shape[0] - self.win_size) // self.step + 1
def __getitem__(self, index):
index = index * self.step
return np.float32(self.train[index:index + self.win_size])
class UniLoader_test(object):
def __init__(self, data_set, label_set, win_size, step, mode="train"):
self.mode = mode
self.step = step
self.win_size = win_size
self.train = data_set
self.train_labels = label_set
def __len__(self):
"""
Number of images in the object dataset.
"""
return (self.train.shape[0] - self.win_size) // self.step + 1
def __getitem__(self, index):
index = index * self.step
return np.float32(self.train[index:index + self.win_size]), np.float32(self.train_labels[0:self.win_size])
def adjustment(gt, pred):
anomaly_state = False
for i in range(len(gt)):
if gt[i] == 1 and pred[i] == 1 and not anomaly_state:
anomaly_state = True
for j in range(i, 0, -1):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
for j in range(i, len(gt)):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
elif gt[i] == 0:
anomaly_state = False
if anomaly_state:
pred[i] = 1
return gt, pred
def adjust_learning_rate(optimizer, epoch, args):
# lr = args.learning_rate * (0.2 ** (epoch // 2))
if args.lradj == 'type1':
lr_adjust = {epoch: args.learning_rate * (0.5 ** ((epoch - 1) // 1))}
elif args.lradj == 'type2':
lr_adjust = {
2: 5e-5, 4: 1e-5, 6: 5e-6, 8: 1e-6,
10: 5e-7, 15: 1e-7, 20: 5e-8
}
elif args.lradj == "cosine":
lr_adjust = {epoch: args.learning_rate /2 * (1 + math.cos(epoch / args.train_epochs * math.pi))}
if epoch in lr_adjust.keys():
lr = lr_adjust[epoch]
for param_group in optimizer.param_groups:
param_group['lr'] = lr
print('Updating learning rate to {}'.format(lr))
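# Hedged note: 'type1' halves the learning rate every epoch, 'type2' follows
# the fixed table above, and 'cosine' anneals as
# lr = learning_rate / 2 * (1 + cos(pi * epoch / train_epochs)).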
class EarlyStopping:
def __init__(self, patience=7, verbose=False, delta=0):
self.patience = patience
self.verbose = verbose
self.counter = 0
self.best_score = None
self.early_stop = False
        self.val_loss_min = np.inf
self.delta = delta
def __call__(self, val_loss, model, path):
score = -val_loss
if self.best_score is None:
self.best_score = score
self.save_checkpoint(val_loss, model, path)
elif score < self.best_score + self.delta:
self.counter += 1
print(f'EarlyStopping counter: {self.counter} out of {self.patience}')
if self.counter >= self.patience:
self.early_stop = True
else:
self.best_score = score
self.save_checkpoint(val_loss, model, path)
self.counter = 0
def save_checkpoint(self, val_loss, model, path):
if self.verbose:
print(f'Validation loss decreased ({self.val_loss_min:.6f} --> {val_loss:.6f}). Saving model ...')
torch.save(model.state_dict(), path + '/' + 'checkpoint.pth')
self.val_loss_min = val_loss
class Exp_Basic(object):
def __init__(self, args):
self.args = args
self.model_dict = {
'TimesNet': TimesNet,
'GPT4TS': GPT4TS,
}
self.device = self._acquire_device()
self.model = self._build_model().to(self.device)
    def _build_model(self):
        raise NotImplementedError
def _acquire_device(self):
if self.args.use_gpu:
os.environ["CUDA_VISIBLE_DEVICES"] = str(
self.args.gpu) if not self.args.use_multi_gpu else self.args.devices
device = torch.device('cuda:{}'.format(self.args.gpu))
print('Use GPU: cuda:{}'.format(self.args.gpu))
else:
device = torch.device('cpu')
print('Use CPU')
return device
def _get_data(self):
pass
def vali(self):
pass
def train(self):
pass
def test(self):
pass
class Exp_Anomaly_Detection(Exp_Basic):
def __init__(self, args, train_set, train_loader, val_set, val_loader, test_set, test_loader):
super(Exp_Anomaly_Detection, self).__init__(args)
self.train_set = train_set
self.train_loader = train_loader
self.val_set = val_set
self.val_loader = val_loader
self.test_set = test_set
self.test_loader = test_loader
def _build_model(self):
model = self.model_dict[self.args.model].Model(self.args).float()
if self.args.use_multi_gpu and self.args.use_gpu:
model = nn.DataParallel(model, device_ids=self.args.device_ids)
return model
def _get_data(self, flag):
# data_set, data_loader = data_provider(self.args, flag)
if flag == 'train':
return self.train_set, self.train_loader
if flag == 'val':
return self.val_set, self.val_loader
if flag == 'test':
return self.test_set, self.test_loader
# return self.data_set, self.data_loader
def _select_optimizer(self):
model_optim = optim.Adam(self.model.parameters(), lr=self.args.learning_rate)
return model_optim
def _select_criterion(self):
criterion = nn.MSELoss()
return criterion
def vali(self, vali_data, vali_loader, criterion):
total_loss = []
self.model.eval()
with torch.no_grad():
for i, (batch_x, _) in enumerate(vali_loader):
batch_x = batch_x.float().to(self.device)
outputs = self.model(batch_x, None, None, None)
f_dim = -1 if self.args.features == 'MS' else 0
outputs = outputs[:, :, f_dim:]
pred = outputs.detach().cpu()
true = batch_x.detach().cpu()
loss = criterion(pred, true)
total_loss.append(loss)
total_loss = np.average(total_loss)
self.model.train()
return total_loss
def vali_uni(self, vali_data, vali_loader, criterion):
total_loss = []
self.model.eval()
with torch.no_grad():
for i, batch_x in enumerate(vali_loader):
batch_x = batch_x.float().to(self.device)
outputs = self.model(batch_x, None, None, None)
f_dim = -1 if self.args.features == 'MS' else 0
outputs = outputs[:, :, f_dim:]
pred = outputs.detach().cpu()
true = batch_x.detach().cpu()
loss = criterion(pred, true)
total_loss.append(loss)
total_loss = np.average(total_loss)
self.model.train()
return total_loss
def train(self, setting):
train_data, train_loader = self._get_data(flag='train')
vali_data, vali_loader = self._get_data(flag='val')
test_data, test_loader = self._get_data(flag='test')
path = os.path.join(self.args.checkpoints, setting)
if not os.path.exists(path):
os.makedirs(path)
time_now = time.time()
train_steps = len(train_loader)
early_stopping = EarlyStopping(patience=self.args.patience, verbose=True)
model_optim = self._select_optimizer()
criterion = self._select_criterion()
for epoch in range(self.args.train_epochs):
iter_count = 0
train_loss = []
self.model.train()
epoch_time = time.time()
for i, (batch_x, batch_y) in enumerate(train_loader):
iter_count += 1
model_optim.zero_grad()
batch_x = batch_x.float().to(self.device)
outputs = self.model(batch_x, None, None, None)
f_dim = -1 if self.args.features == 'MS' else 0
outputs = outputs[:, :, f_dim:]
loss = criterion(outputs, batch_x)
train_loss.append(loss.item())
# print("loss = ", loss)
# print("batch_x.shape = ", batch_x.shape, ", outputs.shape = ", outputs.shape)
if (i + 1) % 100 == 0:
print("\titers: {0}, epoch: {1} | loss: {2:.7f}".format(i + 1, epoch + 1, loss.item()))
speed = (time.time() - time_now) / iter_count
left_time = speed * ((self.args.train_epochs - epoch) * train_steps - i)
print('\tspeed: {:.4f}s/iter; left time: {:.4f}s'.format(speed, left_time))
iter_count = 0
time_now = time.time()
loss.backward()
model_optim.step()
print("Epoch: {} cost time: {}".format(epoch + 1, time.time() - epoch_time))
train_loss = np.average(train_loss)
vali_loss = self.vali(vali_data, vali_loader, criterion)
test_loss = self.vali(test_data, test_loader, criterion)
print("Epoch: {0}, Steps: {1} | Train Loss: {2:.7f} Vali Loss: {3:.7f} Test Loss: {4:.7f}".format(
epoch + 1, train_steps, train_loss, vali_loss, test_loss))
early_stopping(vali_loss, self.model, path)
if early_stopping.early_stop:
print("Early stopping")
break
adjust_learning_rate(model_optim, epoch + 1, self.args)
best_model_path = path + '/' + 'checkpoint.pth'
self.model.load_state_dict(torch.load(best_model_path))
return self.model
def train_uni(self, setting):
train_data, train_loader = self._get_data(flag='train')
vali_data, vali_loader = self._get_data(flag='val')
test_data, test_loader = self._get_data(flag='test')
path = os.path.join(self.args.checkpoints, setting)
if not os.path.exists(path):
os.makedirs(path)
time_now = time.time()
train_steps = len(train_loader)
early_stopping = EarlyStopping(patience=self.args.patience, verbose=True)
model_optim = self._select_optimizer()
criterion = self._select_criterion()
for epoch in range(self.args.train_epochs):
iter_count = 0
train_loss = []
self.model.train()
epoch_time = time.time()
for i, batch_x in enumerate(train_loader):
iter_count += 1
model_optim.zero_grad()
batch_x = batch_x.float().to(self.device)
# print("batch_x.shape = ", batch_x.shape, ", batch_x[:5] = ", batch_x[:5])
outputs = self.model(batch_x, None, None, None)
f_dim = -1 if self.args.features == 'MS' else 0
outputs = outputs[:, :, f_dim:]
loss = criterion(outputs, batch_x)
train_loss.append(loss.item())
# print("loss = ", loss)
# print("batch_x.shape = ", batch_x.shape, ", outputs.shape = ", outputs.shape)
if (i + 1) % 100 == 0:
print("\titers: {0}, epoch: {1} | loss: {2:.7f}".format(i + 1, epoch + 1, loss.item()))
speed = (time.time() - time_now) / iter_count
left_time = speed * ((self.args.train_epochs - epoch) * train_steps - i)
print('\tspeed: {:.4f}s/iter; left time: {:.4f}s'.format(speed, left_time))
iter_count = 0
time_now = time.time()
loss.backward()
model_optim.step()
print("Epoch: {} cost time: {}".format(epoch + 1, time.time() - epoch_time))
train_loss = np.average(train_loss)
vali_loss = self.vali_uni(vali_data, vali_loader, criterion)
# test_loss = self.vali(test_data, test_loader, criterion)
print("Epoch: {0}, Steps: {1} | Train Loss: {2:.7f}".format(
epoch + 1, train_steps, train_loss))
early_stopping(vali_loss, self.model, path)
if early_stopping.early_stop:
print("Early stopping")
break
adjust_learning_rate(model_optim, epoch + 1, self.args)
best_model_path = path + '/' + 'checkpoint.pth'
self.model.load_state_dict(torch.load(best_model_path))
return self.model
def test(self, setting, test=0, dataset=None, ucr_index=None):
test_data, test_loader = self._get_data(flag='test')
train_data, train_loader = self._get_data(flag='train')
if test:
print('loading model')
self.model.load_state_dict(torch.load(os.path.join('./checkpoints/' + setting, 'checkpoint.pth')))
attens_energy = []
folder_path = './test_results/' + setting + '/'
if not os.path.exists(folder_path):
os.makedirs(folder_path)
self.model.eval()
        self.anomaly_criterion = nn.MSELoss(reduction='none')
        # (1) statistics on the train set
with torch.no_grad():
for i, (batch_x, batch_y) in enumerate(train_loader):
batch_x = batch_x.float().to(self.device)
# reconstruction
outputs = self.model(batch_x, None, None, None)
# criterion
score = torch.mean(self.anomaly_criterion(batch_x, outputs), dim=-1)
score = score.detach().cpu().numpy()
attens_energy.append(score)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
train_energy = np.array(attens_energy)
# (2) find the threshold
attens_energy = []
test_labels = []
for i, (batch_x, batch_y) in enumerate(test_loader):
batch_x = batch_x.float().to(self.device)
# reconstruction
outputs = self.model(batch_x, None, None, None)
# criterion
score = torch.mean(self.anomaly_criterion(batch_x, outputs), dim=-1)
score = score.detach().cpu().numpy()
attens_energy.append(score)
test_labels.append(batch_y)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
test_energy = np.array(attens_energy)
combined_energy = np.concatenate([train_energy, test_energy], axis=0)
threshold = np.percentile(combined_energy, 100 - self.args.anomaly_ratio)
print("Threshold :", threshold)
# (3) evaluation on the test set
pred = (test_energy > threshold).astype(int)
test_labels = np.concatenate(test_labels, axis=0).reshape(-1)
test_labels = np.array(test_labels)
gt = test_labels.astype(int)
print("pred: ", pred.shape)
print("gt: ", gt.shape)
# if dataset == 'UCR':
eval_res = {
'f1': None,
'precision': None,
'recall': None,
"Affiliation precision": None,
"Affiliation recall": None,
"R_AUC_ROC": None,
"R_AUC_PR": None,
"VUS_ROC": None,
"VUS_PR": None,
'f1_pa_10': None,
'f1_pa_50': None,
'f1_pa_90': None,
}
# else:
#
# results_f1_pa_k_10 = evaluate.evaluate(test_energy, test_labels, k=10)
# results_f1_pa_k_50 = evaluate.evaluate(test_energy, test_labels, k=50)
# results_f1_pa_k_90 = evaluate.evaluate(test_energy, test_labels, k=90)
#
# eval_res = {
# 'f1': None,
# 'precision': None,
# 'recall': None,
# "Affiliation precision": None,
# "Affiliation recall": None,
# "R_AUC_ROC": None,
# "R_AUC_PR": None,
# "VUS_ROC": None,
# "VUS_PR": None,
# 'f1_pa_10': results_f1_pa_k_10['best_f1_w_pa'],
# 'f1_pa_50': results_f1_pa_k_50['best_f1_w_pa'],
# 'f1_pa_90': results_f1_pa_k_90['best_f1_w_pa'],
# }
        if ucr_index in (79, 108, 187, 203):
            pass
        else:
            if dataset in ('SMD', 'NIPS_TS_Swan', 'NIPS_TS_Water', 'SWAT'):
                pass
            else:
scores_simple = combine_all_evaluation_scores(pred, gt, test_energy)
for key, value in scores_simple.items():
if key == 'Affiliation precision':
eval_res["Affiliation precision"] = value
if key == 'Affiliation recall':
eval_res["Affiliation recall"] = value
if key == 'R_AUC_ROC':
eval_res["R_AUC_ROC"] = value
if key == 'R_AUC_PR':
eval_res["R_AUC_PR"] = value
if key == 'VUS_ROC':
eval_res["VUS_ROC"] = value
if key == 'VUS_PR':
eval_res["VUS_PR"] = value
# (4) detection adjustment
gt, pred = adjustment(gt, pred)
pred = np.array(pred)
gt = np.array(gt)
print("pred: ", pred.shape)
print("gt: ", gt.shape)
accuracy = accuracy_score(gt, pred)
precision, recall, f_score, support = precision_recall_fscore_support(gt, pred, average='binary')
print("Accuracy : {:0.4f}, Precision : {:0.4f}, Recall : {:0.4f}, F-score : {:0.4f} ".format(
accuracy, precision,
recall, f_score))
eval_res['f1'] = f_score
eval_res['precision'] = precision
eval_res['recall'] = recall
with open("result_anomaly_detection.txt", 'a') as f:
    f.write(setting + " \n")
    f.write("Accuracy : {:0.4f}, Precision : {:0.4f}, Recall : {:0.4f}, F-score : {:0.4f} ".format(
        accuracy, precision, recall, f_score))
    f.write('\n')
    f.write('\n')
return eval_res
def test_uni(self, setting, all_train_data, all_test_data, all_test_labels, all_test_timestamps, delay, config, test=0):
# test_data, test_loader = self._get_data(flag='test')
# train_data, train_loader = self._get_data(flag='train')
if test:
print('loading model')
self.model.load_state_dict(torch.load(os.path.join('./checkpoints/' + setting, 'checkpoint.pth')))
attens_energy = []
folder_path = './test_results/' + setting + '/'
if not os.path.exists(folder_path):
os.makedirs(folder_path)
self.model.eval()
self.anomaly_criterion = nn.MSELoss(reduction='none')
# (1) compute statistics on the train set
with torch.no_grad():
# for i, (batch_x, batch_y) in enumerate(train_loader):
# batch_x = batch_x.float().to(self.device)
# # reconstruction
# outputs = self.model(batch_x, None, None, None)
# # criterion
# score = torch.mean(self.anomaly_criterion(batch_x, outputs), dim=-1)
# score = score.detach().cpu().numpy()
# attens_energy.append(score)
for k in all_train_data:
train_data = all_train_data[k]
train_data = np.array(train_data)
train_data = np.expand_dims(train_data, axis=-1)
train_dataset = UniLoader_train(train_data, config.win_size, 1)
train_loader = DataLoader(dataset=train_dataset,
batch_size=config.batch_size,
shuffle=True,
num_workers=2,
drop_last=True)
for i, input_data in enumerate(train_loader):
# print("type(input) = ", type(input_data), input_data.shape)
batch_x = input_data.float().to(self.device)
# reconstruction
outputs = self.model(batch_x, None, None, None)
# criterion
score = torch.mean(self.anomaly_criterion(batch_x, outputs), dim=-1)
score = score.detach().cpu().numpy()
attens_energy.append(score)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
train_energy = np.array(attens_energy)
# (2) find the threshold
attens_energy = []
test_labels = []
with torch.no_grad():
for k in all_train_data:
_test_labels = all_test_labels[k]
test_data = all_test_data[k]
test_data = np.array(test_data)
test_data = np.expand_dims(test_data, axis=-1)
test_dataset = UniLoader_test(test_data, _test_labels, config.win_size, 1)
test_loader = DataLoader(dataset=test_dataset,
batch_size=config.batch_size,
shuffle=True,
num_workers=2,
drop_last=True)
for i, (input_data, labels) in enumerate(test_loader):
batch_x = input_data.float().to(self.device)
outputs = self.model(batch_x, None, None, None)
# criterion
score = torch.mean(self.anomaly_criterion(batch_x, outputs), dim=-1)
score = score.detach().cpu().numpy()
attens_energy.append(score)
test_labels.append(labels)
attens_energy = np.concatenate(attens_energy, axis=0).reshape(-1)
test_energy = np.array(attens_energy)
combined_energy = np.concatenate([train_energy, test_energy], axis=0)
threshold = np.percentile(combined_energy, 100 - self.args.anomaly_ratio)
print("Threshold :", threshold)
# (3) evaluation on the test set
pred = (test_energy > threshold).astype(int)
test_labels = np.concatenate(test_labels, axis=0).reshape(-1)
test_labels = np.array(test_labels)
gt = test_labels.astype(int)
print("pred: ", pred.shape)
print("gt: ", gt.shape)
# if dataset == 'UCR':
eval_res = {
'f1': None,
'precision': None,
'recall': None,
"Affiliation precision": None,
"Affiliation recall": None,
"R_AUC_ROC": None,
"R_AUC_PR": None,
"VUS_ROC": None,
"VUS_PR": None,
'f1_pa_10': None,
'f1_pa_50': None,
'f1_pa_90': None,
}
# scores_simple = combine_all_evaluation_scores(pred, gt, test_energy)
# for key, value in scores_simple.items():
# if key == 'Affiliation precision':
# eval_res["Affiliation precision"] = value
# if key == 'Affiliation recall':
# eval_res["Affiliation recall"] = value
# if key == 'R_AUC_ROC':
# eval_res["R_AUC_ROC"] = value
# if key == 'R_AUC_PR':
# eval_res["R_AUC_PR"] = value
# if key == 'VUS_ROC':
# eval_res["VUS_ROC"] = value
# if key == 'VUS_PR':
# eval_res["VUS_PR"] = value
# (4) detection adjustment
gt, pred = adjustment(gt, pred)
pred = np.array(pred)
gt = np.array(gt)
print("pred: ", pred.shape)
print("gt: ", gt.shape)
accuracy = accuracy_score(gt, pred)
precision, recall, f_score, support = precision_recall_fscore_support(gt, pred, average='binary')
print("Accuracy : {:0.4f}, Precision : {:0.4f}, Recall : {:0.4f}, F-score : {:0.4f} ".format(
accuracy, precision,
recall, f_score))
eval_res['f1'] = f_score
eval_res['precision'] = precision
eval_res['recall'] = recall
with open("result_anomaly_detection.txt", 'a') as f:
    f.write(setting + " \n")
    f.write("Accuracy : {:0.4f}, Precision : {:0.4f}, Recall : {:0.4f}, F-score : {:0.4f} ".format(
        accuracy, precision, recall, f_score))
    f.write('\n')
    f.write('\n')
return eval_res
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/hello_test_evo.py
================================================
print("Hello World!!!")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/lstm_vae.py
================================================
import torch
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
from models.lstm_vae_model import LSTM_VAE_Model
from utils import split_with_nan, centerize_vary_length_series
import math
import time
from tasks.anomaly_detection import eval_ad_result, np_shift
import bottleneck as bn
from sklearn.metrics import f1_score, precision_score, recall_score
from other_anomaly_baselines.metrics.affiliation.metrics import pr_from_events
from other_anomaly_baselines.metrics.vus.metrics import get_range_vus_roc
from other_anomaly_baselines.metrics.affiliation.generics import convert_vector_to_events
from tadpak import evaluate
def adjustment(gt, pred):
anomaly_state = False
for i in range(len(gt)):
if gt[i] == 1 and pred[i] == 1 and not anomaly_state:
anomaly_state = True
for j in range(i, 0, -1):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
for j in range(i, len(gt)):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
elif gt[i] == 0:
anomaly_state = False
if anomaly_state:
pred[i] = 1
return gt, pred
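# Illustrative sketch of the point-adjustment rule above (toy values, not from
# any dataset): once any point inside a ground-truth anomaly segment is
# predicted, the whole segment is marked as detected.
#
#   gt   = [0, 1, 1, 1, 0, 0]
#   pred = [0, 0, 1, 0, 0, 0]
#   adjustment(gt, pred)  # pred becomes [0, 1, 1, 1, 0, 0]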
class LSTM_VAE:
def __init__(
self,
input_dims,
hidden_size=16,
hidden_dim=3,
device='cuda',
lr=0.001,
batch_size=8,
z_kld_weight=0.1,
x_kld_weight=0.1,
max_train_length=None,
after_iter_callback=None,
after_epoch_callback=None
):
super().__init__()
self.device = device
self.lr = lr
self.batch_size = batch_size
self.z_kld_weight = z_kld_weight
self.x_kld_weight = x_kld_weight
self.max_train_length = max_train_length
self.input_dims = input_dims
self.net = LSTM_VAE_Model(device=self.device, in_channel=input_dims, hidden_size=hidden_size, hidden_dim=hidden_dim).to(self.device)
self.after_iter_callback = after_iter_callback
self.after_epoch_callback = after_epoch_callback
self.n_epochs = 0
self.n_iters = 0
def train(self, train_data, n_epochs=None, n_iters=None, verbose=False):
'''
Args:
train_data (numpy.ndarray): The training data. It should have a shape of (n_instance, n_timestamps, n_features). All missing data should be set to NaN.
n_epochs (Union[int, NoneType]): The number of epochs. Training stops once this many epochs have been run.
n_iters (Union[int, NoneType]): The number of iterations. Training stops once this many iterations have been run. If neither n_epochs nor n_iters is specified, a default setting is used that sets n_iters to 200 for a dataset with size <= 100000, and 600 otherwise.
verbose (bool): Whether to print the training loss after each epoch.
Returns:
loss_log: a list containing the training losses on each epoch.
'''
assert train_data.ndim == 3
if n_iters is None and n_epochs is None:
n_iters = 200 if train_data.size <= 100000 else 600 # default param for n_iters
if self.max_train_length is not None:
sections = train_data.shape[1] // self.max_train_length
if sections >= 2:
train_data = np.concatenate(split_with_nan(train_data, sections, axis=1), axis=0)
# train_data: (n_instance*sections, max_train_length, n_features)
temporal_missing = np.isnan(train_data).all(axis=-1).any(axis=0) # (max_train_length)
if temporal_missing[0] or temporal_missing[-1]:  # whether the head or tail contains NaN
train_data = centerize_vary_length_series(train_data)
train_data = train_data[~np.isnan(train_data).all(axis=2).all(axis=1)]
# drop sequences of shape (max_train_length, n_features) that contain only NaN
for i in range(train_data.shape[0]):
train_data[i][np.isnan(train_data[i])] = np.nanmean(train_data[i])
train_dataset = TensorDataset(torch.from_numpy(train_data).to(torch.float))
train_loader = DataLoader(train_dataset, batch_size=min(self.batch_size, len(train_dataset)), shuffle=True, drop_last=True)
optimizer = torch.optim.AdamW(self.net.parameters(), lr=self.lr)
loss_log = []
while True:
if n_epochs is not None and self.n_epochs >= n_epochs:
break
cum_loss = 0
n_epoch_iters = 0
interrupted = False
for batch in train_loader:
if n_iters is not None and self.n_iters >= n_iters:
interrupted = True
break
x = batch[0] #(batch_size, n_timestamps, n_features)
# print("#####################")
# raise Exception('my personal exception!')
if self.max_train_length is not None and x.size(1) > self.max_train_length:
window_offset = np.random.randint(x.size(1) - self.max_train_length + 1)
x = x[:, window_offset : window_offset + self.max_train_length]
x = x.to(self.device)
optimizer.zero_grad()
outputs, z_mu, z_log_var, x_mu, x_log_var = self.net(x)
loss = self.net.loss_function(x, outputs, z_mu, z_log_var, x_mu, x_log_var, self.z_kld_weight, self.x_kld_weight)
loss.backward()
optimizer.step()
cum_loss += loss.item()
n_epoch_iters += 1
self.n_iters += 1
if self.after_iter_callback is not None:
self.after_iter_callback(self, loss.item())
if interrupted:
break
cum_loss /= n_epoch_iters
loss_log.append(cum_loss)
if verbose:
print(f"Epoch #{self.n_epochs}: loss={cum_loss}")
self.n_epochs += 1
if self.after_epoch_callback is not None:
self.after_epoch_callback(self, cum_loss)
return loss_log
def anomaly_score(self, model, test_data, is_multi=False):
if is_multi:
test_data = torch.from_numpy(np.float32(test_data.reshape(1, -1, self.input_dims))).to(self.device)
else:
test_data = torch.from_numpy(np.float32(test_data.reshape(1, -1, 1))).to(self.device)
if self.max_train_length is not None and test_data.size(1) > self.max_train_length:
window_offset = np.random.randint(test_data.size(1) - self.max_train_length + 1)
test_data = test_data[:, window_offset: window_offset + self.max_train_length]
self.net.eval()
with torch.no_grad():
outputs, z_mu, z_log_var, x_mu, x_log_var = self.net(test_data)
# rec_error = torch.sum(torch.abs(outputs - test_data), dim=-1)
rec_error = torch.sum(torch.square(outputs - test_data), dim=-1)
rec_error = torch.flatten(rec_error)
return rec_error
def evaluate(self, model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay, is_multi=False, ucr_index=None):
t = time.time()
res_log = []
labels_log = []
timestamps_log = []
res_log_socres = []
if is_multi:
train_data = all_train_data
test_data = all_test_data
test_labels = all_test_labels
train_err = self.anomaly_score(model, train_data, is_multi=is_multi).detach().cpu().numpy()
test_err = self.anomaly_score(model, test_data, is_multi=is_multi).detach().cpu().numpy()
ma = np_shift(bn.move_mean(np.concatenate([train_err, test_err]), 21), 1)
train_err_adj = (train_err - ma[:len(train_err)]) / ma[:len(train_err)]
test_err_adj = (test_err - ma[len(train_err):]) / ma[len(train_err):]
train_err_adj = train_err_adj[22:]
thr = np.mean(train_err_adj) + 4 * np.std(train_err_adj)
test_res = (test_err_adj > thr) * 1
res_log_socres.append(test_err_adj)
for i in range(len(test_res)):
if i >= delay and test_res[i - delay:i].sum() >= 1:
test_res[i] = 0
res_log.append(test_res)
labels_log.append(test_labels)
else:
for k in all_test_data:
train_data = all_train_data[k]
train_labels = all_train_labels[k]
train_timestamps = all_train_timestamps[k]
test_data = all_test_data[k]
test_labels = all_test_labels[k]
test_timestamps = all_test_timestamps[k]
train_err = self.anomaly_score(model, train_data).detach().cpu().numpy()
test_err = self.anomaly_score(model, test_data).detach().cpu().numpy()
ma = np_shift(bn.move_mean(np.concatenate([train_err, test_err]), 21), 1)
train_err_adj = (train_err - ma[:len(train_err)]) / ma[:len(train_err)]
test_err_adj = (test_err - ma[len(train_err):]) / ma[len(train_err):]
train_err_adj = train_err_adj[22:]
thr = np.mean(train_err_adj) + 4 * np.std(train_err_adj)
test_res = (test_err_adj > thr) * 1
res_log_socres.append(test_err_adj)
for i in range(len(test_res)):
if i >= delay and test_res[i-delay:i].sum() >= 1:
test_res[i] = 0
res_log.append(test_res)
labels_log.append(test_labels)
timestamps_log.append(test_timestamps)
t = time.time() - t
if is_multi:
labels = np.asarray(labels_log, np.int64)[0]
pred = np.asarray(res_log, np.int64)[0]
# print("labels.shape = ", labels.shape, labels[:5])
# print("pred.shape = ", pred.shape, pred[:5])
if ucr_index == 79 or ucr_index == 108 or ucr_index == 187 or ucr_index == 203:
min_len = min(labels.shape[0], pred.shape[0])
labels = labels[:min_len]
pred = pred[:min_len]
labels, pred = adjustment(labels, pred)
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": None,
"Affiliation recall": None,
"R_AUC_ROC": None,
"R_AUC_PR": None,
"VUS_ROC": None,
"VUS_PR": None,
'f1_pa_10': None,
'f1_pa_50': None,
'f1_pa_90': None,
}
else:
events_pred = convert_vector_to_events(pred)
events_gt = convert_vector_to_events(labels)
Trange = (0, len(labels))
affiliation = pr_from_events(events_pred, events_gt, Trange)
vus_results = get_range_vus_roc(labels, pred, 100) # default slidingWindow = 100
# pred_scores = np.asarray(res_log_socres, np.float64)[0]
# results_f1_pa_k_10 = evaluate.evaluate(pred_scores, labels, k=10)
# # results_f1_pa_k_30 = evaluate.evaluate(pred, labels, k=30)
# results_f1_pa_k_50 = evaluate.evaluate(pred_scores, labels, k=50)
# results_f1_pa_k_90 = evaluate.evaluate(pred_scores, labels, k=90)
labels, pred = adjustment(labels, pred)
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"],
'f1_pa_10': None,
'f1_pa_50': None,
'f1_pa_90': None,
}
else:
eval_res = eval_ad_result(res_log, labels_log, timestamps_log, delay, pred_scores=res_log_socres)
eval_res['infer_time'] = t
return res_log, eval_res
def save(self, fn):
''' Save the model to a file.
Args:
fn (str): filename.
'''
torch.save(self.net.state_dict(), fn)
def load(self, fn):
''' Load the model from a file.
Args:
fn (str): filename.
'''
state_dict = torch.load(fn, map_location=self.device)
self.net.load_state_dict(state_dict)
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/AUC.py
================================================
# used by paper: TSB-UAD as the main evaluator
# github: https://github.com/johnpaparrizos/TSB-UAD/blob/main/TSB_AD/utils/metrics.py
import numpy as np
from sklearn import metrics
from other_anomaly_baselines.metrics.evaluate_utils import find_length,range_convers_new
def extend_postive_range(x, window=16):
label = x.copy().astype(float)
# print(label)
L = range_convers_new(label) # index of non-zero segments
# print(L)
length = len(label)
for k in range(len(L)):
s = L[k][0]
e = L[k][1]
# x1 indexes the extension window after the segment end e
x1 = np.arange(e, min(e + window // 2, length))
label[x1] += np.sqrt(1 - (x1 - e) / (window))
# x2 indexes the extension window before the segment start s
x2 = np.arange(max(s - window // 2, 0), s)
label[x2] += np.sqrt(1 - (s - x2) / (window))
label = np.minimum(np.ones(length), label)
return label
def extend_postive_range_individual(x, percentage=0.2):
label = x.copy().astype(float)
L = range_convers_new(label) # index of non-zero segments
length = len(label)
for k in range(len(L)):
s = L[k][0]
e = L[k][1]
l0 = int((e - s + 1) * percentage)
x1 = np.arange(e, min(e + l0, length))
label[x1] += np.sqrt(1 - (x1 - e) / (2 * l0))
x2 = np.arange(max(s - l0, 0), s)
label[x2] += np.sqrt(1 - (s - x2) / (2 * l0))
label = np.minimum(np.ones(length), label)
return label
def TPR_FPR_RangeAUC(labels, pred, P, L):
product = labels * pred
TP = np.sum(product)
# recall = min(TP/P,1)
P_new = (P + np.sum(labels)) / 2 # so TPR is neither large nor small
# P_new = np.sum(labels)
recall = min(TP / P_new, 1)
# recall = TP/np.sum(labels)
# print('recall '+str(recall))
existence = 0
for seg in L:
if np.sum(product[seg[0]:(seg[1] + 1)]) > 0:
existence += 1
existence_ratio = existence / len(L)
# print(existence_ratio)
# TPR_RangeAUC = np.sqrt(recall*existence_ratio)
# print(existence_ratio)
TPR_RangeAUC = recall * existence_ratio
FP = np.sum(pred) - TP
# TN = np.sum((1-pred) * (1-labels))
# FPR_RangeAUC = FP/(FP+TN)
N_new = len(labels) - P_new
FPR_RangeAUC = FP / N_new
Precision_RangeAUC = TP / np.sum(pred)
return TPR_RangeAUC, FPR_RangeAUC, Precision_RangeAUC
def Range_AUC(score_t_test, y_test, window=5, percentage=0, plot_ROC=False, AUC_type='window'):
# AUC_type='window'/'percentage'
score = score_t_test
labels = y_test
score_sorted = -np.sort(-score)
P = np.sum(labels)
# print(np.sum(labels))
if AUC_type == 'window':
labels = extend_postive_range(labels, window=window)
else:
labels = extend_postive_range_individual(labels, percentage=percentage)
# print(np.sum(labels))
L = range_convers_new(labels)
TPR_list = [0]
FPR_list = [0]
Precision_list = [1]
for i in np.linspace(0, len(score) - 1, 250).astype(int):
threshold = score_sorted[i]
# print('thre='+str(threshold))
pred = score >= threshold
TPR, FPR, Precision = TPR_FPR_RangeAUC(labels, pred, P, L)
TPR_list.append(TPR)
FPR_list.append(FPR)
Precision_list.append(Precision)
TPR_list.append(1)
FPR_list.append(1) # otherwise, range-AUC will stop earlier than (1,1)
tpr = np.array(TPR_list)
fpr = np.array(FPR_list)
prec = np.array(Precision_list)
width = fpr[1:] - fpr[:-1]
height = (tpr[1:] + tpr[:-1]) / 2
AUC_range = np.sum(width * height)
width_PR = tpr[1:-1] - tpr[:-2]
height_PR = (prec[1:] + prec[:-1]) / 2
AP_range = np.sum(width_PR * height_PR)
if plot_ROC:
return AUC_range, AP_range, fpr, tpr, prec
return AUC_range
def point_wise_AUC(score_t_test, y_test, plot_ROC=False):
# area under curve
label = y_test
score = score_t_test
auc = metrics.roc_auc_score(label, score)
# plot ROC curve
if plot_ROC:
fpr, tpr, thresholds = metrics.roc_curve(label, score)
# display = metrics.RocCurveDisplay(fpr=fpr, tpr=tpr, roc_auc=auc)
# display.plot()
return auc, fpr, tpr
else:
return auc
def main():
y_test = np.zeros(100)
y_test[10:20] = 1
y_test[50:60] = 1
pred_labels = np.zeros(100)
pred_labels[15:17] = 0.5
pred_labels[55:62] = 0.7
# pred_labels[51:55] = 1
# true_events = get_events(y_test)
point_auc = point_wise_AUC(pred_labels, y_test)
range_auc = Range_AUC(pred_labels, y_test)
print("point_auc: {}, range_auc: {}".format(point_auc, range_auc))
if __name__ == "__main__":
main()
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/Matthews_correlation_coefficient.py
================================================
from sklearn.metrics import confusion_matrix
import numpy as np
def MCC(y_test, pred_labels):
tn, fp, fn, tp = confusion_matrix(y_test, pred_labels).ravel()
MCC_score = (tp*tn-fp*fn)/(((tp+fp)*(tp+fn)*(tn+fp)*(tn+fn))**0.5)
return MCC_score
def main():
y_test = np.zeros(100)
y_test[10:20] = 1
y_test[50:60] = 1
pred_labels = np.zeros(100)
pred_labels[15:17] = 1
pred_labels[55:62] = 1
# pred_labels[51:55] = 1
# true_events = get_events(y_test)
mcc_score = MCC(y_test, pred_labels)
# print(mcc_score)
if __name__ == "__main__":
main()
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/affiliation/_affiliation_zone.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from other_anomaly_baselines.metrics.affiliation._integral_interval import interval_intersection
def t_start(j, Js = [(1,2),(3,4),(5,6)], Trange = (1,10)):
"""
Helper for `E_gt_func`
:param j: index from 0 to len(Js) (included) on which to get the start
:param Js: ground truth events, as a list of couples
:param Trange: range of the series where Js is included
:return: generalized start such that the middle of t_start and t_stop
always gives the affiliation zone
"""
b = max(Trange)
n = len(Js)
if j == n:
return(2*b - t_stop(n-1, Js, Trange))
else:
return(Js[j][0])
def t_stop(j, Js = [(1,2),(3,4),(5,6)], Trange = (1,10)):
"""
Helper for `E_gt_func`
:param j: index from 0 to len(Js) (included) on which to get the stop
:param Js: ground truth events, as a list of couples
:param Trange: range of the series where Js is included
:return: generalized stop such that the middle of t_start and t_stop
always gives the affiliation zone
"""
if j == -1:
a = min(Trange)
return(2*a - t_start(0, Js, Trange))
else:
return(Js[j][1])
def E_gt_func(j, Js, Trange):
"""
Get the affiliation zone of element j of the ground truth
:param j: index from 0 to len(Js) (excluded) on which to get the zone
:param Js: ground truth events, as a list of couples
:param Trange: range of the series where Js is included, can
be (-math.inf, math.inf) for distance measures
:return: affiliation zone of element j of the ground truth represented
as a couple
"""
range_left = (t_stop(j-1, Js, Trange) + t_start(j, Js, Trange))/2
range_right = (t_stop(j, Js, Trange) + t_start(j+1, Js, Trange))/2
return((range_left, range_right))
def get_all_E_gt_func(Js, Trange):
"""
Get the affiliation partition from the ground truth point of view
:param Js: ground truth events, as a list of couples
:param Trange: range of the series where Js is included, can
be (-math.inf, math.inf) for distance measures
:return: affiliation partition of the events
"""
# E_gt is the limit of affiliation/attraction for each ground truth event
E_gt = [E_gt_func(j, Js, Trange) for j in range(len(Js))]
return(E_gt)
def affiliation_partition(Is = [(1,1.5),(2,5),(5,6),(8,9)], E_gt = [(1,2.5),(2.5,4.5),(4.5,10)]):
"""
Cut the events into the affiliation zones
The presentation given here is from the ground truth point of view,
but it is also used in the reversed direction in the main function.
:param Is: events as a list of couples
:param E_gt: range of the affiliation zones
:return: a list of list of intervals (each interval represented by either
a couple or None for empty interval). The outer list is indexed by each
affiliation zone of `E_gt`. The inner list is indexed by the events of `Is`.
"""
out = [None] * len(E_gt)
for j in range(len(E_gt)):
E_gt_j = E_gt[j]
discarded_idx_before = [I[1] < E_gt_j[0] for I in Is] # end point of predicted I is before the begin of E
discarded_idx_after = [I[0] > E_gt_j[1] for I in Is] # start of predicted I is after the end of E
kept_index = [not(a or b) for a, b in zip(discarded_idx_before, discarded_idx_after)]
Is_j = [x for x, y in zip(Is, kept_index) if y]  # keep only the events overlapping the zone
out[j] = [interval_intersection(I, E_gt[j]) for I in Is_j]
return(out)
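# Illustrative example on the docstring defaults (values worked out by hand
# from the definitions above):
#
#   Js = [(1, 2), (3, 4), (5, 6)]; Trange = (1, 10)
#   get_all_E_gt_func(Js, Trange)
#   # -> [(1, 2.5), (2.5, 4.5), (4.5, 10)]
#   affiliation_partition([(1, 1.5), (2, 5), (5, 6), (8, 9)],
#                         [(1, 2.5), (2.5, 4.5), (4.5, 10)])
#   # -> [[(1, 1.5), (2, 2.5)], [(2.5, 4.5)], [(4.5, 5), (5, 6), (8, 9)]]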
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/affiliation/_integral_interval.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import math
from other_anomaly_baselines.metrics.affiliation.generics import _sum_wo_nan
"""
In order to shorten the length of the variables,
the general convention in this file is to let:
- I for a predicted event (start, stop),
- Is for a list of predicted events,
- J for a ground truth event,
- Js for a list of ground truth events.
"""
def interval_length(J = (1,2)):
"""
Length of an interval
:param J: couple representing the start and stop of an interval, or None
:return: length of the interval, and 0 for a None interval
"""
if J is None:
return(0)
return(J[1] - J[0])
def sum_interval_lengths(Is = [(1,2),(3,4),(5,6)]):
"""
Sum of length of the intervals
:param Is: list of intervals represented by starts and stops
:return: sum of the interval length
"""
return(sum([interval_length(I) for I in Is]))
def interval_intersection(I = (1, 3), J = (2, 4)):
"""
Intersection between two intervals I and J
I and J should be either empty or represent a positive interval (no point)
:param I: an interval represented by start and stop
:param J: a second interval of the same form
:return: an interval representing the start and stop of the intersection (or None if empty)
"""
if I is None:
return(None)
if J is None:
return(None)
I_inter_J = (max(I[0], J[0]), min(I[1], J[1]))
if I_inter_J[0] >= I_inter_J[1]:
return(None)
else:
return(I_inter_J)
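# Illustrative examples:
#   interval_intersection((1, 3), (2, 4))  # -> (2, 3)
#   interval_intersection((1, 2), (3, 4))  # -> None (empty intersection)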
def interval_subset(I = (1, 3), J = (0, 6)):
"""
Checks whether I is a subset of J
:param I: a non-empty interval represented by start and stop
:param J: a second non empty interval of the same form
:return: True if I is a subset of J
"""
if (I[0] >= J[0]) and (I[1] <= J[1]):
return True
else:
return False
def cut_into_three_func(I, J):
"""
Cut an interval I into a partition of 3 subsets:
the elements before J,
the elements belonging to J,
and the elements after J
:param I: an interval represented by start and stop, or None for an empty one
:param J: a non empty interval
:return: a triplet of three intervals, each represented by either (start, stop) or None
"""
if I is None:
return((None, None, None))
I_inter_J = interval_intersection(I, J)
if I == I_inter_J:
I_before = None
I_after = None
elif I[1] <= J[0]:
I_before = I
I_after = None
elif I[0] >= J[1]:
I_before = None
I_after = I
elif (I[0] <= J[0]) and (I[1] >= J[1]):
I_before = (I[0], I_inter_J[0])
I_after = (I_inter_J[1], I[1])
elif I[0] <= J[0]:
I_before = (I[0], I_inter_J[0])
I_after = None
elif I[1] >= J[1]:
I_before = None
I_after = (I_inter_J[1], I[1])
else:
raise ValueError('unexpected unconsidered case')
return(I_before, I_inter_J, I_after)
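# Illustrative example:
#   cut_into_three_func((0, 10), (3, 5))
#   # -> ((0, 3), (3, 5), (5, 10)), i.e. (before J, inside J, after J)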
def get_pivot_j(I, J):
"""
Get the single point of J that is the closest to I, called 'pivot' here,
with the requirement that I should be outside J
:param I: a non empty interval (start, stop)
:param J: another non empty interval, with empty intersection with I
:return: the element j of J that is the closest to I
"""
if interval_intersection(I, J) is not None:
raise ValueError('I and J should have a void intersection')
j_pivot = None # j_pivot is a border of J
if max(I) <= min(J):
j_pivot = min(J)
elif min(I) >= max(J):
j_pivot = max(J)
else:
raise ValueError('I should be outside J')
return(j_pivot)
def integral_mini_interval(I, J):
"""
In the specific case where interval I is located outside J,
integral of distance from x to J over the interval x \in I.
This is the *integral* i.e. the sum.
It's not the mean (not divided by the length of I yet)
:param I: an interval (start, stop), or None
:param J: a non empty interval, with empty intersection with I
:return: the integral of distances d(x, J) over x \in I
"""
if I is None:
return(0)
j_pivot = get_pivot_j(I, J)
a = min(I)
b = max(I)
return((b-a)*abs((j_pivot - (a+b)/2)))
def integral_interval_distance(I, J):
"""
For any non empty intervals I, J, compute the
integral of distance from x to J over the interval x \in I.
This is the *integral* i.e. the sum.
It's not the mean (not divided by the length of I yet)
The interval I can intersect J or not
:param I: an interval (start, stop), or None
:param J: a non empty interval
:return: the integral of distances d(x, J) over x \in I
"""
# I and J are single intervals (not generic sets)
# I is a predicted interval in the range of affiliation of J
def f(I_cut):
return(integral_mini_interval(I_cut, J))
# If I_middle is fully included in J, the distance to J is always 0 on it
def f0(I_middle):
return(0)
cut_into_three = cut_into_three_func(I, J)
# Distance for now, not the mean:
# Distance left: Between cut_into_three[0] and the point min(J)
d_left = f(cut_into_three[0])
# Distance middle: Between cut_into_three[1] = I inter J, and J
d_middle = f0(cut_into_three[1])
# Distance right: Between cut_into_three[2] and the point max(J)
d_right = f(cut_into_three[2])
# It's an integral so summable
return(d_left + d_middle + d_right)
def integral_mini_interval_P_CDFmethod__min_piece(I, J, E):
"""
Helper of `integral_mini_interval_Pprecision_CDFmethod`
In the specific case where interval I is located outside J,
compute the integral $\int_{d_min}^{d_max} \min(m, x) dx$, with:
- m the smallest distance from J to E,
- d_min the smallest distance d(x, J) from x \in I to J
- d_max the largest distance d(x, J) from x \in I to J
:param I: a single predicted interval, a non empty interval (start, stop)
:param J: ground truth interval, a non empty interval, with empty intersection with I
:param E: the affiliation/influence zone for J, represented as a couple (start, stop)
:return: the integral $\int_{d_min}^{d_max} \min(m, x) dx$
"""
if interval_intersection(I, J) is not None:
raise ValueError('I and J should have a void intersection')
if not interval_subset(J, E):
raise ValueError('J should be included in E')
if not interval_subset(I, E):
raise ValueError('I should be included in E')
e_min = min(E)
j_min = min(J)
j_max = max(J)
e_max = max(E)
i_min = min(I)
i_max = max(I)
d_min = max(i_min - j_max, j_min - i_max)
d_max = max(i_max - j_max, j_min - i_min)
m = min(j_min - e_min, e_max - j_max)
A = min(d_max, m)**2 - min(d_min, m)**2
B = max(d_max, m) - max(d_min, m)
C = (1/2)*A + m*B
return(C)
def integral_mini_interval_Pprecision_CDFmethod(I, J, E):
"""
Integral of the probability of distances over the interval I.
In the specific case where interval I is located outside J,
compute the integral $\int_{x \in I} Fbar(dist(x,J)) dx$.
This is the *integral* i.e. the sum (not the mean)
:param I: a single predicted interval, a non empty interval (start, stop)
:param J: ground truth interval, a non empty interval, with empty intersection with I
:param E: the affiliation/influence zone for J, represented as a couple (start, stop)
:return: the integral $\int_{x \in I} Fbar(dist(x,J)) dx$
"""
integral_min_piece = integral_mini_interval_P_CDFmethod__min_piece(I, J, E)
e_min = min(E)
j_min = min(J)
j_max = max(J)
e_max = max(E)
i_min = min(I)
i_max = max(I)
d_min = max(i_min - j_max, j_min - i_max)
d_max = max(i_max - j_max, j_min - i_min)
integral_linear_piece = (1/2)*(d_max**2 - d_min**2)
integral_remaining_piece = (j_max - j_min)*(i_max - i_min)
DeltaI = i_max - i_min
DeltaE = e_max - e_min
output = DeltaI - (1/DeltaE)*(integral_min_piece + integral_linear_piece + integral_remaining_piece)
return(output)
def integral_interval_probaCDF_precision(I, J, E):
"""
Integral of the probability of distances over the interval I.
Compute the integral $\int_{x \in I} Fbar(dist(x,J)) dx$.
This is the *integral* i.e. the sum (not the mean)
:param I: a single (non empty) predicted interval in the zone of affiliation of J
:param J: ground truth interval
:param E: affiliation/influence zone for J
:return: the integral $\int_{x \in I} Fbar(dist(x,J)) dx$
"""
# I and J are single intervals (not generic sets)
def f(I_cut):
if I_cut is None:
return(0)
else:
return(integral_mini_interval_Pprecision_CDFmethod(I_cut, J, E))
# If I_middle is fully included in J, the integrand is 1 on I_middle,
# so the integral equals |I_middle|
def f0(I_middle):
if I_middle is None:
return(0)
else:
return(max(I_middle) - min(I_middle))
cut_into_three = cut_into_three_func(I, J)
# Distance for now, not the mean:
# Distance left: Between cut_into_three[0] and the point min(J)
d_left = f(cut_into_three[0])
# Distance middle: Between cut_into_three[1] = I inter J, and J
d_middle = f0(cut_into_three[1])
# Distance right: Between cut_into_three[2] and the point max(J)
d_right = f(cut_into_three[2])
# It's an integral so summable
return(d_left + d_middle + d_right)
def cut_J_based_on_mean_func(J, e_mean):
"""
Helper function for the recall.
Partition J into two intervals: before and after e_mean
(e_mean represents the center element of E the zone of affiliation)
:param J: ground truth interval
:param e_mean: a float number (center value of E)
:return: a couple partitioning J into (J_before, J_after)
"""
if J is None:
J_before = None
J_after = None
elif e_mean >= max(J):
J_before = J
J_after = None
elif e_mean <= min(J):
J_before = None
J_after = J
else: # e_mean is across J
J_before = (min(J), e_mean)
J_after = (e_mean, max(J))
return((J_before, J_after))
def integral_mini_interval_Precall_CDFmethod(I, J, E):
"""
Integral of the probability of distances over the interval J.
In the specific case where interval J is located outside I,
compute the integral $\int_{y \in J} Fbar_y(dist(y,I)) dy$.
This is the *integral* i.e. the sum (not the mean)
:param I: a single (non empty) predicted interval
:param J: ground truth (non empty) interval, with empty intersection with I
:param E: the affiliation/influence zone for J, represented as a couple (start, stop)
:return: the integral $\int_{y \in J} Fbar_y(dist(y,I)) dy$
"""
# The interval J should be located outside I
# (so it's either the left piece or the right piece w.r.t I)
i_pivot = get_pivot_j(J, I)
e_min = min(E)
e_max = max(E)
e_mean = (e_min + e_max) / 2
# If i_pivot is outside E (it's possible), then
# the distance is worse than for any random element within E,
# so we set the recall to 0
if i_pivot <= min(E):
return(0)
elif i_pivot >= max(E):
return(0)
# Otherwise, we have at least i_pivot in E and so d < M so min(d,M)=d
cut_J_based_on_e_mean = cut_J_based_on_mean_func(J, e_mean)
J_before = cut_J_based_on_e_mean[0]
J_after = cut_J_based_on_e_mean[1]
iemin_mean = (e_min + i_pivot)/2
cut_Jbefore_based_on_iemin_mean = cut_J_based_on_mean_func(J_before, iemin_mean)
J_before_closeE = cut_Jbefore_based_on_iemin_mean[0] # before e_mean and closer to e_min than i_pivot ~ J_before_before
J_before_closeI = cut_Jbefore_based_on_iemin_mean[1] # before e_mean and closer to i_pivot than e_min ~ J_before_after
iemax_mean = (e_max + i_pivot)/2
cut_Jafter_based_on_iemax_mean = cut_J_based_on_mean_func(J_after, iemax_mean)
J_after_closeI = cut_Jafter_based_on_iemax_mean[0] # after e_mean and closer to i_pivot than e_max ~ J_after_before
J_after_closeE = cut_Jafter_based_on_iemax_mean[1] # after e_mean and closer to e_max than i_pivot ~ J_after_after
if J_before_closeE is not None:
j_before_before_min = min(J_before_closeE) # == min(J)
j_before_before_max = max(J_before_closeE)
else:
j_before_before_min = math.nan
j_before_before_max = math.nan
if J_before_closeI is not None:
j_before_after_min = min(J_before_closeI) # == j_before_before_max if existing
j_before_after_max = max(J_before_closeI) # == max(J_before)
else:
j_before_after_min = math.nan
j_before_after_max = math.nan
if J_after_closeI is not None:
j_after_before_min = min(J_after_closeI) # == min(J_after)
j_after_before_max = max(J_after_closeI)
else:
j_after_before_min = math.nan
j_after_before_max = math.nan
if J_after_closeE is not None:
j_after_after_min = min(J_after_closeE) # == j_after_before_max if existing
j_after_after_max = max(J_after_closeE) # == max(J)
else:
j_after_after_min = math.nan
j_after_after_max = math.nan
# <-- J_before_closeE --> <-- J_before_closeI --> <-- J_after_closeI --> <-- J_after_closeE -->
# j_bb_min j_bb_max j_ba_min j_ba_max j_ab_min j_ab_max j_aa_min j_aa_max
# (with `b` for before and `a` for after in the previous variable names)
# vs e_mean m = min(t-e_min, e_max-t) d=|i_pivot-t| min(d,m) \int min(d,m)dt \int d dt \int_(min(d,m)+d)dt \int_{t \in J}(min(d,m)+d)dt
# Case J_before_closeE & i_pivot after J before t-e_min i_pivot-t min(i_pivot-t,t-e_min) = t-e_min t^2/2-e_min*t i_pivot*t-t^2/2 t^2/2-e_min*t+i_pivot*t-t^2/2 = (i_pivot-e_min)*t (i_pivot-e_min)*tB - (i_pivot-e_min)*tA = (i_pivot-e_min)*(tB-tA)
# Case J_before_closeI & i_pivot after J before t-e_min i_pivot-t min(i_pivot-t,t-e_min) = i_pivot-t i_pivot*t-t^2/2 i_pivot*t-t^2/2 i_pivot*t-t^2/2+i_pivot*t-t^2/2 = 2*i_pivot*t-t^2 2*i_pivot*tB-tB^2 - 2*i_pivot*tA + tA^2 = 2*i_pivot*(tB-tA) - (tB^2 - tA^2)
# Case J_after_closeI & i_pivot after J after e_max-t i_pivot-t min(i_pivot-t,e_max-t) = i_pivot-t i_pivot*t-t^2/2 i_pivot*t-t^2/2 i_pivot*t-t^2/2+i_pivot*t-t^2/2 = 2*i_pivot*t-t^2 2*i_pivot*tB-tB^2 - 2*i_pivot*tA + tA^2 = 2*i_pivot*(tB-tA) - (tB^2 - tA^2)
# Case J_after_closeE & i_pivot after J after e_max-t i_pivot-t min(i_pivot-t,e_max-t) = e_max-t e_max*t-t^2/2 i_pivot*t-t^2/2 e_max*t-t^2/2+i_pivot*t-t^2/2 = (e_max+i_pivot)*t-t^2 (e_max+i_pivot)*tB-tB^2 - (e_max+i_pivot)*tA + tA^2 = (e_max+i_pivot)*(tB-tA) - (tB^2 - tA^2)
#
# Case J_before_closeE & i_pivot before J before t-e_min t-i_pivot min(t-i_pivot,t-e_min) = t-e_min t^2/2-e_min*t t^2/2-i_pivot*t t^2/2-e_min*t+t^2/2-i_pivot*t = t^2-(e_min+i_pivot)*t tB^2-(e_min+i_pivot)*tB - tA^2 + (e_min+i_pivot)*tA = (tB^2 - tA^2) - (e_min+i_pivot)*(tB-tA)
# Case J_before_closeI & i_pivot before J before t-e_min t-i_pivot min(t-i_pivot,t-e_min) = t-i_pivot t^2/2-i_pivot*t t^2/2-i_pivot*t t^2/2-i_pivot*t+t^2/2-i_pivot*t = t^2-2*i_pivot*t tB^2-2*i_pivot*tB - tA^2 + 2*i_pivot*tA = (tB^2 - tA^2) - 2*i_pivot*(tB-tA)
# Case J_after_closeI & i_pivot before J after e_max-t t-i_pivot min(t-i_pivot,e_max-t) = t-i_pivot t^2/2-i_pivot*t t^2/2-i_pivot*t t^2/2-i_pivot*t+t^2/2-i_pivot*t = t^2-2*i_pivot*t tB^2-2*i_pivot*tB - tA^2 + 2*i_pivot*tA = (tB^2 - tA^2) - 2*i_pivot*(tB-tA)
# Case J_after_closeE & i_pivot before J after e_max-t t-i_pivot min(t-i_pivot,e_max-t) = e_max-t e_max*t-t^2/2 t^2/2-i_pivot*t e_max*t-t^2/2+t^2/2-i_pivot*t = (e_max-i_pivot)*t (e_max-i_pivot)*tB - (e_max-i_pivot)*tA = (e_max-i_pivot)*(tB-tA)
if i_pivot >= max(J):
part1_before_closeE = (i_pivot-e_min)*(j_before_before_max - j_before_before_min) # (i_pivot-e_min)*(tB-tA) # j_before_before_max - j_before_before_min
part2_before_closeI = 2*i_pivot*(j_before_after_max-j_before_after_min) - (j_before_after_max**2 - j_before_after_min**2) # 2*i_pivot*(tB-tA) - (tB^2 - tA^2) # j_before_after_max - j_before_after_min
part3_after_closeI = 2*i_pivot*(j_after_before_max-j_after_before_min) - (j_after_before_max**2 - j_after_before_min**2) # 2*i_pivot*(tB-tA) - (tB^2 - tA^2) # j_after_before_max - j_after_before_min
part4_after_closeE = (e_max+i_pivot)*(j_after_after_max-j_after_after_min) - (j_after_after_max**2 - j_after_after_min**2) # (e_max+i_pivot)*(tB-tA) - (tB^2 - tA^2) # j_after_after_max - j_after_after_min
out_parts = [part1_before_closeE, part2_before_closeI, part3_after_closeI, part4_after_closeE]
elif i_pivot <= min(J):
part1_before_closeE = (j_before_before_max**2 - j_before_before_min**2) - (e_min+i_pivot)*(j_before_before_max-j_before_before_min) # (tB^2 - tA^2) - (e_min+i_pivot)*(tB-tA) # j_before_before_max - j_before_before_min
part2_before_closeI = (j_before_after_max**2 - j_before_after_min**2) - 2*i_pivot*(j_before_after_max-j_before_after_min) # (tB^2 - tA^2) - 2*i_pivot*(tB-tA) # j_before_after_max - j_before_after_min
part3_after_closeI = (j_after_before_max**2 - j_after_before_min**2) - 2*i_pivot*(j_after_before_max - j_after_before_min) # (tB^2 - tA^2) - 2*i_pivot*(tB-tA) # j_after_before_max - j_after_before_min
part4_after_closeE = (e_max-i_pivot)*(j_after_after_max - j_after_after_min) # (e_max-i_pivot)*(tB-tA) # j_after_after_max - j_after_after_min
out_parts = [part1_before_closeE, part2_before_closeI, part3_after_closeI, part4_after_closeE]
else:
raise ValueError('The i_pivot should be outside J')
out_integral_min_dm_plus_d = _sum_wo_nan(out_parts) # integral on all J, i.e. sum of the disjoint parts
# We have for each point t of J:
# \bar{F}_{t, recall}(d) = 1 - (1/|E|) * (min(d,m) + d)
# Since t is a single-point here, and we are in the case where i_pivot is inside E.
# The integral is then given by:
# C = \int_{t \in J} \bar{F}_{t, recall}(D(t)) dt
# = \int_{t \in J} 1 - (1/|E|) * (min(d,m) + d) dt
# = |J| - (1/|E|) * [\int_{t \in J} (min(d,m) + d) dt]
# = |J| - (1/|E|) * out_integral_min_dm_plus_d
DeltaJ = max(J) - min(J)
DeltaE = max(E) - min(E)
C = DeltaJ - (1/DeltaE) * out_integral_min_dm_plus_d
return(C)
def integral_interval_probaCDF_recall(I, J, E):
"""
Integral of the probability of distances over the interval J.
Compute the integral $\int_{y \in J} Fbar_y(dist(y,I)) dy$.
This is the *integral* i.e. the sum (not the mean)
:param I: a single (non empty) predicted interval
:param J: ground truth (non empty) interval
:param E: the affiliation/influence zone for J
:return: the integral $\int_{y \in J} Fbar_y(dist(y,I)) dy$
"""
# I and J are single intervals (not generic sets)
# E is the outside affiliation interval of J (even for recall!)
# (in particular J \subset E)
#
# J is the portion of the ground truth affiliated to I
# I is a predicted interval (can be outside E possibly since it's recall)
def f(J_cut):
if J_cut is None:
return(0)
else:
return integral_mini_interval_Precall_CDFmethod(I, J_cut, E)
# If J_middle is fully included in I, the integrand is 1 on J_middle,
# so the integral equals |J_middle|
def f0(J_middle):
if J_middle is None:
return(0)
else:
return(max(J_middle) - min(J_middle))
cut_into_three = cut_into_three_func(J, I) # it's J that we cut into 3, depending on the position w.r.t I
# since we integrate over J this time.
#
# Distance for now, not the mean:
# Distance left: Between cut_into_three[0] and the point min(I)
d_left = f(cut_into_three[0])
# Distance middle: Between cut_into_three[1] = J inter I, and I
d_middle = f0(cut_into_three[1])
# Distance right: Between cut_into_three[2] and the point max(I)
d_right = f(cut_into_three[2])
# It's an integral so summable
return(d_left + d_middle + d_right)
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/affiliation/_single_ground_truth_event.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import math
from other_anomaly_baselines.metrics.affiliation._affiliation_zone import (
get_all_E_gt_func,
affiliation_partition)
from other_anomaly_baselines.metrics.affiliation._integral_interval import (
integral_interval_distance,
integral_interval_probaCDF_precision,
integral_interval_probaCDF_recall,
interval_length,
sum_interval_lengths)
def affiliation_precision_distance(Is = [(1,2),(3,4),(5,6)], J = (2,5.5)):
"""
Compute the individual average distance from Is to a single ground truth J
:param Is: list of predicted events within the affiliation zone of J
:param J: couple representing the start and stop of a ground truth interval
:return: individual average precision directed distance number
"""
if all([I is None for I in Is]): # no prediction in the current area
return(math.nan) # undefined
return(sum([integral_interval_distance(I, J) for I in Is]) / sum_interval_lengths(Is))
def affiliation_precision_proba(Is = [(1,2),(3,4),(5,6)], J = (2,5.5), E = (0,8)):
"""
Compute the individual precision probability from Is to a single ground truth J
:param Is: list of predicted events within the affiliation zone of J
:param J: couple representing the start and stop of a ground truth interval
:param E: couple representing the start and stop of the zone of affiliation of J
:return: individual precision probability in [0, 1], or math.nan if undefined
"""
if all([I is None for I in Is]): # no prediction in the current area
return(math.nan) # undefined
return(sum([integral_interval_probaCDF_precision(I, J, E) for I in Is]) / sum_interval_lengths(Is))
def affiliation_recall_distance(Is = [(1,2),(3,4),(5,6)], J = (2,5.5)):
"""
Compute the individual average distance from a single J to the predictions Is
:param Is: list of predicted events within the affiliation zone of J
:param J: couple representing the start and stop of a ground truth interval
:return: individual average recall directed distance number
"""
Is = [I for I in Is if I is not None] # filter possible None in Is
if len(Is) == 0: # there is no prediction in the current area
return(math.inf)
E_gt_recall = get_all_E_gt_func(Is, (-math.inf, math.inf)) # here from the point of view of the predictions
Js = affiliation_partition([J], E_gt_recall) # partition of J depending of proximity with Is
return(sum([integral_interval_distance(J[0], I) for I, J in zip(Is, Js)]) / interval_length(J))
def affiliation_recall_proba(Is = [(1,2),(3,4),(5,6)], J = (2,5.5), E = (0,8)):
"""
Compute the individual recall probability from a single ground truth J to Is
:param Is: list of predicted events within the affiliation zone of J
:param J: couple representing the start and stop of a ground truth interval
:param E: couple representing the start and stop of the zone of affiliation of J
:return: individual recall probability in [0, 1]
"""
Is = [I for I in Is if I is not None] # filter possible None in Is
if len(Is) == 0: # there is no prediction in the current area
return(0)
E_gt_recall = get_all_E_gt_func(Is, E) # here from the point of view of the predictions
Js = affiliation_partition([J], E_gt_recall) # partition of J depending of proximity with Is
return(sum([integral_interval_probaCDF_recall(I, J[0], E) for I, J in zip(Is, Js)]) / interval_length(J))
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/affiliation/generics.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from itertools import groupby
from operator import itemgetter
import math
import gzip
import glob
import os
def convert_vector_to_events(vector = [0, 1, 1, 0, 0, 1, 0]):
"""
Convert a binary vector (indicating 1 for the anomalous instances)
to a list of events. The events are considered as durations,
i.e. setting 1 at index i corresponds to an anomalous interval [i, i+1).
:param vector: a list of elements belonging to {0, 1}
:return: a list of couples, each couple representing the start and stop of
each event
"""
positive_indexes = [idx for idx, val in enumerate(vector) if val > 0]
events = []
for k, g in groupby(enumerate(positive_indexes), lambda ix : ix[0] - ix[1]):
cur_cut = list(map(itemgetter(1), g))
events.append((cur_cut[0], cur_cut[-1]))
# Consistent conversion in case of range anomalies (for indexes):
# A positive index i is considered as the interval [i, i+1),
# so the last index should be moved by 1
events = [(x, y+1) for (x,y) in events]
# print("events = ", events)
return(events)
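# Illustrative example: for the default vector [0, 1, 1, 0, 0, 1, 0], the
# function returns [(1, 3), (5, 6)] (half-open index intervals).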
def infer_Trange(events_pred, events_gt):
"""
Given the list of events events_pred and events_gt, get the
smallest possible Trange corresponding to the start and stop indexes
of the whole series.
Trange will not influence the measure of distances, but will impact the
measures of probabilities.
:param events_pred: a list of couples corresponding to predicted events
:param events_gt: a list of couples corresponding to ground truth events
:return: a couple corresponding to the smallest range containing the events
"""
if len(events_gt) == 0:
raise ValueError('The gt events should contain at least one event')
if len(events_pred) == 0:
# empty prediction, base Trange only on events_gt (which is non empty)
return(infer_Trange(events_gt, events_gt))
min_pred = min([x[0] for x in events_pred])
min_gt = min([x[0] for x in events_gt])
max_pred = max([x[1] for x in events_pred])
max_gt = max([x[1] for x in events_gt])
Trange = (min(min_pred, min_gt), max(max_pred, max_gt))
return(Trange)
def has_point_anomalies(events):
"""
Checking whether events contain point anomalies, i.e.
events starting and stopping at the same time.
:param events: a list of couples corresponding to predicted events
:return: True if the events have any point anomalies, False otherwise
"""
if len(events) == 0:
return(False)
return(min([x[1] - x[0] for x in events]) == 0)
def _sum_wo_nan(vec):
"""
Sum of elements, ignoring math.isnan ones
:param vec: vector of floating numbers
:return: sum of the elements, ignoring math.isnan ones
"""
vec_wo_nan = [e for e in vec if not math.isnan(e)]
return(sum(vec_wo_nan))
def _len_wo_nan(vec):
"""
Count of elements, ignoring math.isnan ones
:param vec: vector of floating numbers
:return: count of the elements, ignoring math.isnan ones
"""
vec_wo_nan = [e for e in vec if not math.isnan(e)]
return(len(vec_wo_nan))
def read_gz_data(filename = 'data/machinetemp_groundtruth.gz'):
"""
Load a file compressed with gz, such that each line of the
file is either 0 (representing a normal instance) or 1 (representing
an anomalous instance).
:param filename: file path to the gz compressed file
:return: list of integers with either 0 or 1
"""
with gzip.open(filename, 'rb') as f:
content = f.read().splitlines()
content = [int(x) for x in content]
return(content)
def read_all_as_events():
"""
Load the files contained in the folder `data/` and convert
to events. The length of the series is kept.
The convention for the file name is: `dataset_algorithm.gz`
:return: two dictionaries:
- the first containing the list of events for each dataset and algorithm,
- the second containing the range of the series for each dataset
"""
filepaths = glob.glob('data/*.gz')
datasets = dict()
Tranges = dict()
for filepath in filepaths:
vector = read_gz_data(filepath)
events = convert_vector_to_events(vector)
# ad hoc cut for those files
cut_filepath = (os.path.split(filepath)[1]).split('_')
data_name = cut_filepath[0]
algo_name = (cut_filepath[1]).split('.')[0]
if data_name not in datasets:
datasets[data_name] = dict()
Tranges[data_name] = (0, len(vector))
datasets[data_name][algo_name] = events
return(datasets, Tranges)
def f1_func(p, r):
"""
Compute the f1 function
:param p: precision numeric value
:param r: recall numeric value
:return: f1 numeric value
"""
return(2*p*r/(p+r))
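# Note: f1_func assumes p + r > 0; calling it with p == r == 0 raises
# ZeroDivisionError.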
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/affiliation/metrics.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from other_anomaly_baselines.metrics.affiliation.generics import (
infer_Trange,
has_point_anomalies,
_len_wo_nan,
_sum_wo_nan,
read_all_as_events)
from other_anomaly_baselines.metrics.affiliation._affiliation_zone import (
get_all_E_gt_func,
affiliation_partition)
from other_anomaly_baselines.metrics.affiliation._single_ground_truth_event import (
affiliation_precision_distance,
affiliation_recall_distance,
affiliation_precision_proba,
affiliation_recall_proba)
def test_events(events):
"""
Verify the validity of the input events
:param events: list of events, each represented by a couple (start, stop)
:return: None. Raises an error for incorrectly formed or non-ordered events
"""
if type(events) is not list:
raise TypeError('Input `events` should be a list of couples')
if not all([type(x) is tuple for x in events]):
raise TypeError('Input `events` should be a list of tuples')
if not all([len(x) == 2 for x in events]):
raise ValueError('Input `events` should be a list of couples (start, stop)')
if not all([x[0] <= x[1] for x in events]):
raise ValueError('Input `events` should be a list of couples (start, stop) with start <= stop')
if not all([events[i][1] < events[i+1][0] for i in range(len(events) - 1)]):
raise ValueError('Couples of input `events` should be disjoint and ordered')
def pr_from_events(events_pred, events_gt, Trange):
"""
Compute the affiliation metrics including the precision/recall in [0,1],
along with the individual precision/recall distances and probabilities
:param events_pred: list of predicted events, each represented by a couple
indicating the start and the stop of the event
:param events_gt: list of ground truth events, each represented by a couple
indicating the start and the stop of the event
:param Trange: range of the series where events_pred and events_gt are included,
represented as a couple (start, stop)
:return: dictionary with precision, recall, and the individual metrics
"""
# testing the inputs
test_events(events_pred)
test_events(events_gt)
# other tests
minimal_Trange = infer_Trange(events_pred, events_gt)
if not Trange[0] <= minimal_Trange[0]:
raise ValueError('`Trange` should include all the events')
if not minimal_Trange[1] <= Trange[1]:
raise ValueError('`Trange` should include all the events')
if len(events_gt) == 0:
raise ValueError('Input `events_gt` should have at least one event')
if has_point_anomalies(events_pred) or has_point_anomalies(events_gt):
raise ValueError('Cannot manage point anomalies currently')
if Trange is None:
# Set as default, but Trange should be indicated if probabilities are used
raise ValueError('Trange should be indicated (or inferred with the `infer_Trange` function)')
E_gt = get_all_E_gt_func(events_gt, Trange)
aff_partition = affiliation_partition(events_pred, E_gt)
# Computing precision distance
d_precision = [affiliation_precision_distance(Is, J) for Is, J in zip(aff_partition, events_gt)]
# Computing recall distance
d_recall = [affiliation_recall_distance(Is, J) for Is, J in zip(aff_partition, events_gt)]
# Computing precision
p_precision = [affiliation_precision_proba(Is, J, E) for Is, J, E in zip(aff_partition, events_gt, E_gt)]
# Computing recall
p_recall = [affiliation_recall_proba(Is, J, E) for Is, J, E in zip(aff_partition, events_gt, E_gt)]
if _len_wo_nan(p_precision) > 0:
p_precision_average = _sum_wo_nan(p_precision) / _len_wo_nan(p_precision)
else:
p_precision_average = p_precision[0] # math.nan
p_recall_average = sum(p_recall) / len(p_recall)
dict_out = dict({'precision': p_precision_average,
'recall': p_recall_average,
'individual_precision_probabilities': p_precision,
'individual_recall_probabilities': p_recall,
'individual_precision_distances': d_precision,
'individual_recall_distances': d_recall})
return(dict_out)
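# Minimal usage sketch (toy events, not from the repository's datasets):
#
#   events_pred = [(2, 4), (10, 13)]
#   events_gt = [(3, 5), (11, 12)]
#   res = pr_from_events(events_pred, events_gt, (0, 20))
#   print(res['precision'], res['recall'])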
def produce_all_results():
"""
Produce the affiliation precision/recall for all files
contained in the `data` repository
:return: a dictionary indexed by data names, each containing a dictionary
indexed by algorithm names, each containing the results of the affiliation
metrics (precision, recall, individual probabilities and distances)
"""
datasets, Tranges = read_all_as_events() # read all the events in folder `data`
results = dict()
for data_name in datasets.keys():
results_data = dict()
for algo_name in datasets[data_name].keys():
if algo_name != 'groundtruth':
results_data[algo_name] = pr_from_events(datasets[data_name][algo_name],
datasets[data_name]['groundtruth'],
Tranges[data_name])
results[data_name] = results_data
return(results)
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/combine_all_scores.py
================================================
from f1_score_f1_pa import *
from fc_score import *
from precision_at_k import *
from customizable_f1_score import *
from AUC import *
from Matthews_correlation_coefficient import *
from affiliation.generics import convert_vector_to_events
from affiliation.metrics import pr_from_events
from vus.models.feature import Window
from vus.metrics import get_range_vus_roc
def combine_all_evaluation_scores(y_test, pred_labels, anomaly_scores):
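# Caveat: convert_vector_to_events(y_test) is assigned to events_pred and
# vice versa below, so the affiliation precision/recall come out swapped
# unless the predictions are passed as the first argument (the test routines
# in this repository call this function as (pred, gt, scores)).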
events_pred = convert_vector_to_events(y_test) # [(4, 5), (8, 9)]
events_gt = convert_vector_to_events(pred_labels) # [(3, 4), (7, 10)]
Trange = (0, len(y_test))
affiliation = pr_from_events(events_pred, events_gt, Trange)
true_events = get_events(y_test)
_, _, _, f1_score_ori, f05_score_ori = get_accuracy_precision_recall_fscore(y_test, pred_labels)
f1_score_pa = get_point_adjust_scores(y_test, pred_labels, true_events)[5]
pa_accuracy, pa_precision, pa_recall, pa_f_score = get_adjust_F1PA(y_test, pred_labels)
range_f_score = customizable_f1_score(y_test, pred_labels)
_, _, f1_score_c = get_composite_fscore_raw(y_test, pred_labels, true_events, return_prec_rec=True)
precision_k = precision_at_k(y_test, anomaly_scores, pred_labels)
point_auc = point_wise_AUC(pred_labels, y_test)
range_auc = Range_AUC(pred_labels, y_test)
MCC_score = MCC(y_test, pred_labels)
results = get_range_vus_roc(y_test, pred_labels, 100) # slidingWindow = 100 default
score_list = {"f1_score_ori": f1_score_ori,
"f05_score_ori" : f05_score_ori,
"f1_score_pa": f1_score_pa,
"pa_accuracy":pa_accuracy,
"pa_precision":pa_precision,
"pa_recall":pa_recall,
"pa_f_score":pa_f_score,
"range_f_score": range_f_score,
"f1_score_c": f1_score_c,
"precision_k": precision_k,
"point_auc": point_auc,
"range_auc": range_auc,
"MCC_score":MCC_score,
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": results["R_AUC_ROC"],
"R_AUC_PR": results["R_AUC_PR"],
"VUS_ROC": results["VUS_ROC"],
"VUS_PR": results["VUS_PR"]}
return score_list
def main():
y_test = np.zeros(100)
y_test[10:20] = 1
y_test[50:60] = 1
pred_labels = np.zeros(100)
pred_labels[15:17] = 1
pred_labels[55:62] = 1
anomaly_scores = np.zeros(100)
anomaly_scores[15:17] = 0.7
anomaly_scores[55:62] = 0.6
pred_labels[51:55] = 1
true_events = get_events(y_test)
scores = combine_all_evaluation_scores(y_test, pred_labels, anomaly_scores)
# scores = test(y_test, pred_labels)
for key,value in scores.items():
print(key,' : ',value)
if __name__ == "__main__":
main()
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/customizable_f1_score.py
================================================
# used by paper: Exathlon: A Benchmark for Explainable Anomaly Detection over Time Series_VLDB 2021
# github: https://github.com/exathlonbenchmark/exathlon
import numpy as np
from other_anomaly_baselines.metrics.evaluate_utils import range_convers_new
# the existence reward on the bias
def b(bias, i, length):
if bias == 'flat':
return 1
elif bias == 'front-end bias':
return length - i + 1
elif bias == 'back-end bias':
return i
else:
if i <= length / 2:
return i
else:
return length - i + 1
def w(AnomalyRange, p):
MyValue = 0
MaxValue = 0
start = AnomalyRange[0]
AnomalyLength = AnomalyRange[1] - AnomalyRange[0] + 1
# flat/'front-end bias'/'back-end bias'
bias = 'flat'
for i in range(start, start + AnomalyLength):
bi = b(bias, i, AnomalyLength)
MaxValue += bi
if i in p:
MyValue += bi
return MyValue / MaxValue
def Cardinality_factor(AnomalyRange, Prange):
    score = 0
    start = AnomalyRange[0]
    end = AnomalyRange[1]
for i in Prange:
if start <= i[0] <= end:
score += 1
elif i[0] <= start <= i[1]:
score += 1
elif i[0] <= end <= i[1]:
score += 1
elif start >= i[0] and end <= i[1]:
score += 1
if score == 0:
return 0
else:
return 1 / score
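# Worked example (illustrative): for AnomalyRange = (10, 13) with the default
# flat bias, MaxValue = 4; if the predicted positions p cover only {11, 12},
# then w = 2 / 4 = 0.5. If two predicted ranges overlap that anomaly,
# Cardinality_factor returns 1 / 2, halving the overlap reward for it.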
def existence_reward(labels, preds):
    '''
    labels: list of ordered pairs (start, end) of true anomaly ranges
    preds: indices of the points predicted as anomalous
    '''
score = 0
for i in labels:
if np.sum(np.multiply(preds <= i[1], preds >= i[0])) > 0:
score += 1
return score
def range_recall_new(labels, preds, alpha):
p = np.where(preds == 1)[0] # positions of predicted label==1
range_pred = range_convers_new(preds)
range_label = range_convers_new(labels)
Nr = len(range_label) # total # of real anomaly segments
ExistenceReward = existence_reward(range_label, p)
OverlapReward = 0
for i in range_label:
OverlapReward += w(i, p) * Cardinality_factor(i, range_pred)
score = alpha * ExistenceReward + (1 - alpha) * OverlapReward
if Nr != 0:
return score / Nr, ExistenceReward / Nr, OverlapReward / Nr
else:
return 0, 0, 0
def customizable_f1_score(y_test, pred_labels, alpha=0.2):
label = y_test
preds = pred_labels
Rrecall, ExistenceReward, OverlapReward = range_recall_new(label, preds, alpha)
Rprecision = range_recall_new(preds, label, 0)[0]
if Rprecision + Rrecall == 0:
Rf = 0
else:
Rf = 2 * Rrecall * Rprecision / (Rprecision + Rrecall)
return Rf
def main():
y_test = np.zeros(100)
y_test[10:20] = 1
y_test[50:60] = 1
pred_labels = np.zeros(100)
pred_labels[15:19] = 1
pred_labels[55:62] = 1
# pred_labels[51:55] = 1
# true_events = get_events(y_test)
Rf = customizable_f1_score(y_test, pred_labels)
print("Rf: {}".format(Rf))
if __name__ == "__main__":
main()
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/evaluate_utils.py
================================================
import numpy as np
from statsmodels.tsa.stattools import acf
from scipy.signal import argrelextrema
def get_composite_fscore_from_scores(score_t_test, thres, true_events, prec_t, return_prec_rec=False):
    pred_labels = score_t_test > thres
    tp = np.sum([pred_labels[start:end + 1].any() for start, end in true_events.values()])
    fn = len(true_events) - tp
    rec_e = tp / (tp + fn)
    # guard against 0/0 before forming the harmonic mean
    if prec_t == 0 and rec_e == 0:
        fscore_c = 0
    else:
        fscore_c = 2 * rec_e * prec_t / (rec_e + prec_t)
    if return_prec_rec:
        return prec_t, rec_e, fscore_c
    return fscore_c
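# Worked example (illustrative): with 2 true events of which the predictions
# touch exactly 1, rec_e = 1/2; if the point-wise precision prec_t is 0.8, the
# composite score is fscore_c = 2 * 0.5 * 0.8 / (0.5 + 0.8) ~= 0.615.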
class NptConfig:
def __init__(self, config_dict):
for k, v in config_dict.items():
setattr(self, k, v)
def find_length(data):
if len(data.shape) > 1:
return 0
data = data[:min(20000, len(data))]
base = 3
auto_corr = acf(data, nlags=400, fft=True)[base:]
local_max = argrelextrema(auto_corr, np.greater)[0]
try:
max_local_max = np.argmax([auto_corr[lcm] for lcm in local_max])
if local_max[max_local_max] < 3 or local_max[max_local_max] > 300:
return 125
return local_max[max_local_max] + base
    except (ValueError, IndexError):
return 125
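# find_length estimates a dominant period for a univariate series: it computes
# the autocorrelation up to lag 400, takes the highest local maximum past lag 3,
# and falls back to a default window of 125 when that peak is implausible
# (smaller than 3 or larger than 300) or cannot be found.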
def range_convers_new(label):
    '''
    input: array of binary values
    output: list of ordered pairs [(a0, b0), (a1, b1), ...] giving the
    inclusive start/end indices of each run of 1s in the input
    '''
L = []
i = 0
j = 0
while j < len(label):
while label[i] == 0:
i += 1
if i >= len(label):
break
j = i + 1
if j >= len(label):
if j == len(label):
L.append((i, j - 1))
break
while label[j] != 0:
j += 1
if j >= len(label):
L.append((i, j - 1))
break
if j >= len(label):
break
L.append((i, j - 1))
i = j
return L
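# Example (illustrative): range_convers_new([0, 1, 1, 0, 0, 1]) returns
# [(1, 2), (5, 5)], one inclusive (start, end) pair per run of 1s.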
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/evaluator.py
================================================
import logging
import os
import pickle
import copy
import json
import numpy as np
import pandas as pd
from logger_configs.configurations import datasets_config, default_thres_config
from datasets.data_preprocess.dataset import get_events
from logger_configs.logger import init_logging
from src.evaluation.evaluation_utils import get_dataset_class, get_algo_class, get_chan_num, collect_eval_metrics, \
combine_entities_eval_metrics, get_dynamic_scores, get_gaussian_kernel_scores
from src.evaluation.evaluation_utils import fit_distributions, get_scores_channelwise
from src.algorithms.algorithm_utils import load_torch_algo
from src.evaluation.trainer import Trainer
def evaluate(saved_model_root, logger, thres_methods=["top_k_time", "best_f1_test"], eval_root_cause=True,
point_adjust=False, eval_R_model=True, eval_dyn=False, thres_config=None,
telem_only=True, make_plots=["prc", "score_t"], composite_best_f1=False):
seed = 42
saved_model_folders = os.listdir(saved_model_root)
saved_model_folders.sort() # Sort directories in alphabetical order
plots_dir = os.path.join(saved_model_root, "plots")
os.makedirs(plots_dir, exist_ok=True)
# Initialize dictionary structure to collect results from each entity
algo_results = {"hr_100_all": [], "hr_150_all": [], "rc_top3_all": [], "val_recons_err": [], "val_loss": [],
"std_scores_train": [], "auroc": [], "avg_prec": []}
eval_methods = ["time-wise"]
for thres_method in thres_methods + ["tail_prob", "pot"]:
algo_results[thres_method] = {"hr_100_tp": [], "hr_150_tp": [], "rc_top3_tp": [], "opt_thres": [],
"fscore_comp": [], "rec_e": []}
for eval_method in eval_methods:
algo_results[thres_method][eval_method] = {"tp": [], "fp": [], "fn": []}
algo_R_results = copy.deepcopy(algo_results)
telemanom_gauss_s_results = copy.deepcopy(algo_results)
algo_dyn_results = copy.deepcopy(algo_results)
algo_dyn_gauss_conv_results = copy.deepcopy(algo_results)
path_decomposition = os.path.normpath(saved_model_root).split(os.path.sep)
algo_name = path_decomposition[-3]
me_ds_name = path_decomposition[-4].split("_me")[0]
ds_class = get_dataset_class(me_ds_name)
rca_possible = eval_root_cause
thres_config_dict = None
for folder_name in saved_model_folders:
if ".ini" in folder_name or ".csv" in folder_name or ".pdf" in folder_name:
continue
elif os.path.split(folder_name)[-1] == "plots":
continue
# get dataset
entity = os.path.split(folder_name)[-1]
if me_ds_name in ["msl", "smd", "smap"]:
entity = entity.split("-", 1)[1]
ds_init_params = {"seed": seed, "entity": entity}
if me_ds_name == "swat-long":
ds_init_params["shorten_long"] = False
if me_ds_name == "damadics-s":
ds_init_params["drop_init_test"] = True
dataset = ds_class(**ds_init_params)
plots_name = os.path.join(plots_dir, algo_name + "_" + me_ds_name + "_" + entity + "_")
# ds_name = dataset.name
logger.info("Processing Folder name: {}, {} on me dataset {}, entity {}".format(folder_name, algo_name,
me_ds_name, entity))
if thres_config is not None:
thres_config_dict = thres_config(me_ds_name)
else:
thres_config_dict = default_thres_config
# get test scores from pkl
raw_preds_file = os.path.join(saved_model_root, folder_name, "raw_predictions")
try:
with open(raw_preds_file, 'rb') as file:
preds = pickle.load(file)
except:
logger.info("The raw predictions of %s on %s weren't found, this run can't be evaluated" % (algo_name,
me_ds_name))
return None
# Get the true labels
_, _, _, y_test = dataset.data()
true_events = get_events(y_test)
root_causes = None
if eval_root_cause:
root_causes = dataset.get_root_causes()
# Flag that indicates root cause identification evaluation is possible
rca_possible = eval_root_cause and (preds["score_tc_test"] is not None or preds["error_tc_test"] is not None) \
and root_causes is not None
# Load the predictions
score_t_test = preds["score_t_test"]
score_tc_test = preds["score_tc_test"]
error_tc_test = preds["error_tc_test"]
error_t_test = preds["error_t_test"]
score_t_train = preds["score_t_train"]
score_tc_train = preds["score_tc_train"]
error_tc_train = preds["error_tc_train"]
error_t_train = preds["error_t_train"]
recons_tc_train = preds["recons_tc_train"]
recons_tc_test = preds["recons_tc_test"]
try:
val_recons_err = np.nanmean(preds["val_recons_err"])
except:
val_recons_err = None
try:
val_loss = preds["val_loss"]
except:
val_loss = None
if telem_only and me_ds_name in ["msl", "smap"]:
if error_tc_train is not None:
error_t_train = error_tc_train[:, 0]
error_tc_train = None
if error_tc_test is not None:
error_t_test = error_tc_test[:, 0]
error_tc_test = None
if score_tc_test is not None:
score_t_test = score_tc_test[:, 0]
score_tc_test = None
eval_R = eval_R_model and (error_tc_test is not None or error_t_test is not None)
eval_dyn = eval_dyn and ((error_tc_test is not None) or
(error_t_test is not None))
# Evaluate on each entity
logger.info("Evaluating for score_t")
algo_results = collect_eval_metrics(algo_results=algo_results, score_t_test=score_t_test, y_test=y_test,
thres_methods=thres_methods, logger=logger, true_events=true_events,
rca_possible=rca_possible and (preds["score_tc_test"] is not None),
score_tc_test=score_tc_test,
root_causes=root_causes, score_t_train=score_t_train,
point_adjust=point_adjust, thres_config_dict=thres_config_dict,
eval_methods=eval_methods, make_plots=make_plots, dataset=dataset,
plots_name=plots_name + "base", composite_best_f1=composite_best_f1)
algo_results["val_recons_err"].append(val_recons_err)
algo_results["val_loss"].append(val_loss)
algo_results["std_scores_train"].append(np.std(score_t_train))
if eval_R:
if algo_name == "TelemanomAlgo":
logger.info("Evaluating for static gaussian for TelemanomAlgo")
# get static gaussian scores. This is usually done in the trainer, but not for this algo
distr_names = ["univar_gaussian"]
distr_par_file = os.path.join(saved_model_root, folder_name, "distr_parameters")
if error_t_train is None or error_tc_train is None:
score_t_test_gauss_s = error_t_test
score_t_train_gauss_s = None
score_tc_test_gauss_s = error_tc_test
else:
distr_params = fit_distributions(distr_par_file, distr_names, predictions_dic=
{"train_raw_scores": error_tc_train})[distr_names[0]]
score_t_train_gauss_s, _, score_t_test_gauss_s, score_tc_train_gauss_s, _, score_tc_test_gauss_s = \
get_scores_channelwise(distr_params, train_raw_scores=error_tc_train,
val_raw_scores=None, test_raw_scores=error_tc_test,
logcdf=True)
telemanom_gauss_s_results = collect_eval_metrics(algo_results=telemanom_gauss_s_results,
score_t_test=score_t_test_gauss_s,
y_test=y_test,
thres_methods=thres_methods,
logger=logger,
true_events=true_events,
rca_possible=rca_possible,
score_tc_test=score_tc_test_gauss_s,
root_causes=root_causes,
score_t_train=score_t_train_gauss_s,
point_adjust=point_adjust,
thres_config_dict=thres_config_dict,
eval_methods=eval_methods,
make_plots=make_plots,
dataset=dataset,
plots_name=plots_name + "-gauss-s",
composite_best_f1=composite_best_f1)
if error_tc_train is not None and error_tc_test is not None:
logger.info("Doing mean adjustment of train and test error_tc")
mean_c_train = np.mean(error_tc_train, axis=0)
error_tc_train_normed = error_tc_train - mean_c_train
error_tc_test_normed = error_tc_test - mean_c_train
error_t_train_normed = np.sqrt(np.mean(error_tc_train_normed ** 2, axis=1))
error_t_test_normed = np.sqrt(np.mean(error_tc_test_normed ** 2, axis=1))
else:
error_t_test_normed = error_t_test
error_t_train_normed = error_t_train
error_tc_test_normed = error_tc_test
error_tc_train_normed = None
logger.info("Evaluating for error_t")
algo_R_results = collect_eval_metrics(algo_results=algo_R_results, score_t_test=error_t_test_normed,
y_test=y_test,
thres_methods=thres_methods, logger=logger, true_events=true_events,
rca_possible=rca_possible, score_tc_test=error_tc_test_normed,
root_causes=root_causes, score_t_train=error_t_train_normed,
point_adjust=point_adjust,
thres_config_dict=thres_config_dict, eval_methods=eval_methods,
make_plots=make_plots, dataset=dataset,
plots_name=plots_name + "R",
composite_best_f1=composite_best_f1,
score_tc_train=error_tc_train_normed)
algo_R_results["val_recons_err"].append(val_recons_err)
algo_R_results["val_loss"].append(val_loss)
if error_t_train is not None:
algo_R_results["std_scores_train"].append(np.std(error_t_train))
# dynamic scoring function
if eval_dyn:
# dyn_thres_methods = ["best_f1_test"]
dyn_thres_methods = thres_methods
logger.info("Evaluating gaussian dynamic scoring for error_t with thres_methods {}".format(dyn_thres_methods))
long_window = thres_config_dict["dyn_gauss"]["long_window"]
short_window = thres_config_dict["dyn_gauss"]["short_window"]
if telem_only and me_ds_name in ["msl", "smap"]:
score_t_test_dyn, score_tc_test_dyn, score_t_train_dyn, score_tc_train_dyn = get_dynamic_scores(
error_tc_train=None, error_tc_test=None, error_t_train=error_t_train, error_t_test=error_t_test,
long_window=long_window, short_window=short_window)
else:
score_t_test_dyn, score_tc_test_dyn, score_t_train_dyn, score_tc_train_dyn = get_dynamic_scores(
error_tc_train, error_tc_test, error_t_train, error_t_test, long_window=long_window,
short_window=short_window)
algo_dyn_results = collect_eval_metrics(algo_results=algo_dyn_results, score_t_test=score_t_test_dyn,
y_test=y_test, thres_methods=dyn_thres_methods,
logger=logger, rca_possible=rca_possible, true_events=true_events,
score_tc_test=score_tc_test_dyn, root_causes=root_causes,
score_t_train=score_t_train_dyn, point_adjust=point_adjust,
thres_config_dict=thres_config_dict, eval_methods=eval_methods,
make_plots=make_plots, dataset=dataset,
plots_name=plots_name + "dyn",
composite_best_f1=composite_best_f1,
score_tc_train=score_tc_train_dyn)
algo_dyn_results["val_recons_err"].append(val_recons_err)
algo_dyn_results["val_loss"].append(val_loss)
if score_t_train_dyn is not None:
algo_dyn_results["std_scores_train"].append(np.std(score_t_train_dyn))
kernel_sigma = thres_config_dict["dyn_gauss"]["kernel_sigma"]
score_t_test_dyn_gauss_conv, score_tc_test_dyn_gauss_conv = get_gaussian_kernel_scores(
score_t_test_dyn, score_tc_test_dyn, kernel_sigma)
if score_t_train_dyn is not None:
score_t_train_dyn_gauss_conv, _ = get_gaussian_kernel_scores(score_t_train_dyn, score_tc_train_dyn,
kernel_sigma)
else:
score_t_train_dyn_gauss_conv = None
algo_dyn_gauss_conv_results = collect_eval_metrics(algo_results=algo_dyn_gauss_conv_results,
score_t_test=score_t_test_dyn_gauss_conv,
y_test=y_test,
thres_methods=dyn_thres_methods,
logger=logger,
rca_possible=rca_possible,
true_events=true_events,
score_tc_test=score_tc_test_dyn_gauss_conv,
root_causes=root_causes,
score_t_train=score_t_train_dyn_gauss_conv,
point_adjust=point_adjust,
thres_config_dict=thres_config_dict,
eval_methods=eval_methods,
make_plots=make_plots, dataset=dataset,
plots_name=plots_name + "dyn-gauss-conv",
composite_best_f1=composite_best_f1,
score_tc_train=None)
# Combine results from each entity
final_results, column_names = combine_entities_eval_metrics(algo_results, thres_methods, me_ds_name, algo_name,
rca_possible, eval_methods=eval_methods)
if eval_R_model:
results_R, _ = combine_entities_eval_metrics(algo_R_results, thres_methods, me_ds_name, algo_name + "-R",
rca_possible, eval_methods=eval_methods)
final_results = np.concatenate((final_results, results_R), axis=0)
if algo_name == "TelemanomAlgo":
results_telem, _ = combine_entities_eval_metrics(telemanom_gauss_s_results, thres_methods, me_ds_name,
algo_name + "-Gauss-S", rca_possible, eval_methods=eval_methods)
final_results = np.concatenate((final_results, results_telem), axis=0)
if eval_dyn:
results_dyn, _ = combine_entities_eval_metrics(algo_dyn_results, dyn_thres_methods,
me_ds_name, algo_name + "-dyn",
rca_possible, eval_methods=eval_methods)
final_results = np.concatenate((final_results, results_dyn), axis=0)
results_dyn_gauss_conv, _ = combine_entities_eval_metrics(algo_dyn_gauss_conv_results,
dyn_thres_methods,
me_ds_name, algo_name + "-dyn-gauss-conv",
rca_possible, eval_methods=eval_methods)
final_results = np.concatenate((final_results, results_dyn_gauss_conv), axis=0)
results_df = pd.DataFrame(final_results, columns=column_names)
results_df["folder_name"] = saved_model_root
new_col_order = list(results_df.columns)[:3] + ["point_adjust"] + list(results_df.columns)[3:]
results_df["point_adjust"] = point_adjust
results_df = results_df[new_col_order]
with open(os.path.join(os.path.dirname(saved_model_root), "config.json")) as file:
algo_config = json.dumps(json.load(file))
results_df["config"] = algo_config
if thres_config_dict is not None:
results_df["thres_config"] = str(thres_config_dict)
if point_adjust:
filename = os.path.join(saved_model_root, "results_point_adjust.csv")
else:
filename = os.path.join(saved_model_root, "results.csv")
results_df.to_csv(filename, index=False)
logger.info("Saved results to {}".format(filename))
def analyse_from_pkls(results_root:str, thres_methods=["best_f1_test"], eval_root_cause=True, point_adjust=False,
eval_R_model=True, eval_dyn=True, thres_config=None, logger=None,
telem_only=True, filename_prefix="", rerun_if_ds=None, process_seeds=None, make_plots=[],
composite_best_f1=False):
"""
Function that reads saved predictions and evaluates them for anomaly detection and diagnosis under various
settings.
:param results_root: dir where predictions for the algo generated by the trainer in a specific folder structure.
:param thres_methods: list of thresholding methods with which to evaluate
:param eval_root_cause: Set it to True if root cause is desired and possible (i.e. if channel-wise scores are provided
in the predictions, else False
:param point_adjust: True if point-adjusted evaluation is desired.
:param eval_R_model: Corresponds to using Errors scoring function. Set it to True only if errors_t or errors_tc are
avaiable in the predictions. Pre-requisite to be True for eval_dyn.
:param eval_dyn: Set it to True if Gauss-D and Gauss-D-K scoring function evaluation is desired. Needs eval_R_model
to be True.
:param thres_config: A function that takes the dataset name as input and returns a dictionary corresponding to the
config for each method in thres_methods.
:param logger: for logging.
:param telem_only: Only affects the evaluation for MSL and SMAP datasets. If set to True, only the sensor channel,
i.e. first channel will be used in evaluation. If False, all channels - sensors and commands will be used.
:param filename_prefix: desired prefix on the filename.
:param process_seeds: specify a list if only some of the seeds need to be analyzed. If None, all seeds for which
results.csv doesn't exist will be (re)analyzed.
:param rerun_if_ds: set the names of specific datasets for which (re)analysis is required. Otherwise only datasets
for which results.csv doesn't exist will be (re)analyzed.
:param make_plots: specify which plots are desired. ["prc", "score_t"] are implemented.
:param composite_best_f1: if set to True, the "best-f1" threshold will be computed as "best-fc1" threshold.
:return: None. The evalution results are saved as results.csv for each run.
"""
seed = 42
result_df_list = []
ds_folders = os.listdir(results_root)
if point_adjust:
result_filename = "results_point_adjust.csv"
else:
result_filename = "results.csv"
if logger is None:
init_logging(os.path.join(results_root, 'logs'), prefix="eval")
logger = logging.getLogger(__name__)
for ds_folder in ds_folders:
if ds_folder.endswith(".csv") or ds_folder == "logs" or "thres_results" in ds_folder:
continue
ds_path = os.path.join(results_root, ds_folder)
algo_folders = os.listdir(ds_path)
for algo_folder in algo_folders:
algo_path = os.path.join(ds_path, algo_folder)
config_folders = os.listdir(algo_path)
config_folders = [folder for folder in config_folders if not folder.endswith(".csv")]
for config_folder in config_folders:
config_path = os.path.join(algo_path, config_folder)
run_folders = os.listdir(config_path)
for run_folder in run_folders:
if not run_folder.endswith(".json"):
run_path = os.path.join(config_path, run_folder)
current_seed = int(run_folder.split("-", 2)[0])
if process_seeds is not None:
if current_seed not in process_seeds:
continue
if rerun_if_ds is not None:
if (ds_folder in rerun_if_ds) or (rerun_if_ds == 'all'):
if os.path.exists(os.path.join(run_path, result_filename)):
os.remove(os.path.join(run_path, result_filename))
if not os.path.exists(os.path.join(run_path, result_filename)):
entity_folders = os.listdir(run_path)
skip_this_run = False # Flag to indicate that evaluation for this run is impossible
for entity_folder in entity_folders:
if entity_folder != "plots" and ".csv" not in entity_folder and entity_folder != "logs":
entity_path = os.path.join(run_path, entity_folder)
if not skip_this_run:
try:
with open(os.path.join(entity_path, "raw_predictions"), "rb") as file:
raw_predictions = pickle.load(file)
assert "score_t_test" in raw_predictions.keys()
if np.isnan(raw_predictions["score_t_test"]).any():
skip_this_run = True
except:
me_ds_name = ds_folder.split("_me")[0]
ds_class = get_dataset_class(me_ds_name)
algo_class = get_algo_class(algo_folder)
entity_name = entity_folder.replace("smap-", "").replace("msl-", "").\
replace("smd-", "")
ds_init_params = {"seed": seed, "entity": entity_name}
if me_ds_name == "swat-long":
ds_init_params["shorten_long"] = False
entity_ds = ds_class(**ds_init_params)
repredict = repredict_from_saved_model(entity_path, algo_class=algo_class,
entity=entity_ds, logger=logger)
if not repredict:
logger.warning("Predictions and trained model couldn't be found, evaluation is "
"impossible for run saved at %s" % run_path)
skip_this_run = True
if not skip_this_run:
evaluate(run_path, thres_methods=thres_methods, eval_root_cause=eval_root_cause,
point_adjust=point_adjust, eval_R_model=eval_R_model, eval_dyn=eval_dyn,
thres_config=thres_config, logger=logger, telem_only=telem_only,
make_plots=make_plots, composite_best_f1=composite_best_f1)
try:
result_df = pd.read_csv(os.path.join(run_path, result_filename))
if "point_adjust" not in result_df.columns:
result_df["point_adjust"] = False
result_df_list.append(result_df)
except:
logger.warning("Results table couldn't be found for run saved at %s" % run_path)
overall_results = pd.concat(result_df_list, ignore_index=True)
overall_results.to_csv(os.path.join(results_root, filename_prefix+"overall_" + result_filename))
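# Hedged usage sketch (the path below is hypothetical): re-evaluate every saved
# run under a results directory with point-adjusted scoring, skipping plots.
# analyse_from_pkls("path/to/results_root", thres_methods=["best_f1_test"],
#                   point_adjust=True, eval_R_model=True, eval_dyn=False)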
def repredict_from_saved_model(model_root, algo_class, entity, logger):
algo_config_filename = os.path.join(model_root, "init_params")
saved_model_filename = [os.path.join(model_root, filename) for filename in
os.listdir(model_root) if "trained_model" in filename]
if len(saved_model_filename) == 1:
saved_model_filename = saved_model_filename[0]
else:
saved_model_filename.sort(key=get_chan_num)
    # additional_params is a single file path; the len()==1 unpacking that used to
    # follow here was a copy-paste leftover from the saved_model_filename handling above
    additional_params_filename = os.path.join(model_root, "additional_params")
try:
algo_reload = load_torch_algo(algo_class, algo_config_filename, saved_model_filename,
additional_params_filename, eval=True)
_ = Trainer.predict(algo_reload, entity, model_root, logger=logger)
return True
except Exception as e:
logger.warning(f"An error occurred while loading saved algo and repredicting: {e}")
return False
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/f1_score_f1_pa.py
================================================
import numpy as np
from sklearn.metrics import precision_recall_curve, roc_curve, auc, roc_auc_score, precision_score, recall_score, \
accuracy_score, fbeta_score, average_precision_score
# function: calculate the point-adjusted f-scores (optionally requiring top-k hits per event)
def get_point_adjust_scores(y_test, pred_labels, true_events, threshold_k=0, whether_top_k=False):
tp = 0
fn = 0
for true_event in true_events.keys():
true_start, true_end = true_events[true_event]
if whether_top_k is False:
if pred_labels[true_start:true_end].sum() > 0:
tp += (true_end - true_start)
else:
fn += (true_end - true_start)
else:
            if pred_labels[true_start:true_end].sum() > threshold_k:
tp += (true_end - true_start)
else:
fn += (true_end - true_start)
fp = np.sum(pred_labels) - np.sum(pred_labels * y_test)
prec, rec, fscore = get_prec_rec_fscore(tp, fp, fn)
return fp, fn, tp, prec, rec, fscore
def get_adjust_F1PA(pred, gt):
    pred = np.array(pred).copy()  # avoid mutating the caller's predictions in place
    anomaly_state = False
for i in range(len(gt)):
if gt[i] == 1 and pred[i] == 1 and not anomaly_state:
anomaly_state = True
for j in range(i, 0, -1):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
for j in range(i, len(gt)):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
elif gt[i] == 0:
anomaly_state = False
if anomaly_state:
pred[i] = 1
    from sklearn.metrics import precision_recall_fscore_support
    accuracy = accuracy_score(gt, pred)  # accuracy_score is already imported at module level
    precision, recall, f_score, support = precision_recall_fscore_support(gt, pred, average='binary')
return accuracy, precision, recall, f_score
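# Worked example of point adjustment (illustrative): with gt = [0, 1, 1, 1, 0]
# and pred = [0, 0, 1, 0, 0], the single hit inside the ground-truth segment
# expands to pred = [0, 1, 1, 1, 0], so precision, recall and F1 are computed
# on the adjusted predictions and all reach 1.0 here.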
# calculate the point-adjusted f-score
def get_prec_rec_fscore(tp, fp, fn):
if tp == 0:
precision = 0
recall = 0
else:
precision = tp / (tp + fp)
recall = tp / (tp + fn)
fscore = get_f_score(precision, recall)
return precision, recall, fscore
def get_f_score(prec, rec):
if prec == 0 and rec == 0:
f_score = 0
else:
f_score = 2 * (prec * rec) / (prec + rec)
return f_score
# function: calculate the normal edition f-scores
def get_accuracy_precision_recall_fscore(y_true: list, y_pred: list):
    accuracy = accuracy_score(y_true, y_pred)
    precision = precision_score(y_true, y_pred)
    recall = recall_score(y_true, y_pred)
    # guard against 0/0 when there are no positive predictions and no hits
    if precision == 0 and recall == 0:
        f_score = 0
        f05_score = 0
    else:
        f_score = (2 * precision * recall) / (precision + recall)
        f05_score = fbeta_score(y_true, y_pred, average='binary', beta=0.5)
    return accuracy, precision, recall, f_score, f05_score
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/f1_series.py
================================================
from fc_score import *
from f1_score_f1_pa import *
from evaluate_utils import *
default_thres_config = {"top_k_time": {},
"best_f1_test": {"exact_pt_adj": True},
"thresholded_score": {},
"tail_prob": {"tail_prob": 2},
"tail_prob_1": {"tail_prob": 1},
"tail_prob_2": {"tail_prob": 2},
"tail_prob_3": {"tail_prob": 3},
"tail_prob_4": {"tail_prob": 4},
"tail_prob_5": {"tail_prob": 5},
"dyn_gauss": {"long_window": 10000, "short_window": 1, "kernel_sigma": 10},
"nasa_npt": {"batch_size": 70, "window_size": 30, "telem_only": True,
"smoothing_perc": 0.005, "l_s": 250, "error_buffer": 5, "p": 0.05}}
def threshold_and_predict(score_t_test, y_test, true_events, logger, test_anom_frac, thres_method="top_k_time",
point_adjust=False, score_t_train=None, thres_config_dict=dict(), return_auc=False,
composite_best_f1=False):
if thres_method in thres_config_dict.keys():
config = thres_config_dict[thres_method]
else:
config = default_thres_config[thres_method]
# test_anom_frac = (np.sum(y_test)) / len(y_test)
auroc = None
avg_prec = None
if thres_method == "thresholded_score":
opt_thres = 0.5
if set(score_t_test) - {0, 1}:
logger.error("Score_t_test isn't binary. Predicting all as non-anomalous")
pred_labels = np.zeros(len(score_t_test))
else:
pred_labels = score_t_test
elif thres_method == "best_f1_test" and point_adjust:
prec, rec, thresholds = precision_recall_curve(y_test, score_t_test, pos_label=1)
if not config["exact_pt_adj"]:
fscore_best_time = [get_f_score(precision, recall) for precision, recall in zip(prec, rec)]
opt_num = np.squeeze(np.argmax(fscore_best_time))
opt_thres = thresholds[opt_num]
            thresholds = np.append(np.random.choice(thresholds, size=5000), opt_thres)
fscores = []
for thres in thresholds:
_, _, _, _, _, fscore = get_point_adjust_scores(y_test, score_t_test > thres, true_events)
fscores.append(fscore)
opt_thres = thresholds[np.argmax(fscores)]
pred_labels = score_t_test > opt_thres
elif thres_method == "best_f1_test" and composite_best_f1:
prec, rec, thresholds = precision_recall_curve(y_test, score_t_test, pos_label=1)
precs_t = prec
fscores_c = [get_composite_fscore_from_scores(score_t_test, thres, true_events, prec_t) for thres, prec_t in
zip(thresholds, precs_t)]
try:
opt_thres = thresholds[np.nanargmax(fscores_c)]
except:
opt_thres = 0.0
pred_labels = score_t_test > opt_thres
elif thres_method == "top_k_time":
opt_thres = np.nanpercentile(score_t_test, 100 * (1 - test_anom_frac), interpolation='higher')
pred_labels = np.where(score_t_test > opt_thres, 1, 0)
elif thres_method == "best_f1_test":
prec, rec, thres = precision_recall_curve(y_test, score_t_test, pos_label=1)
fscore = [get_f_score(precision, recall) for precision, recall in zip(prec, rec)]
opt_num = np.squeeze(np.argmax(fscore))
opt_thres = thres[opt_num]
pred_labels = np.where(score_t_test > opt_thres, 1, 0)
elif "tail_prob" in thres_method:
tail_neg_log_prob = config["tail_prob"]
opt_thres = tail_neg_log_prob
pred_labels = np.where(score_t_test > opt_thres, 1, 0)
elif thres_method == "nasa_npt":
opt_thres = 0.5
pred_labels = get_npt_labels(score_t_test, y_test, config)
else:
logger.error("Thresholding method {} not in [top_k_time, best_f1_test, tail_prob]".format(thres_method))
return None, None
if return_auc:
avg_prec = average_precision_score(y_test, score_t_test)
auroc = roc_auc_score(y_test, score_t_test)
return opt_thres, pred_labels, avg_prec, auroc
return opt_thres, pred_labels
# top-level evaluation function
def evaluate_predicted_labels(pred_labels, y_test, true_events, logger, eval_method="time-wise", breaks=[],
point_adjust=False):
"""
Computes evaluation metrics for the binary classifications given the true and predicted labels
:param point_adjust: used to judge whether is pa
:param pred_labels: array of predicted labels
:param y_test: array of true labels
:param eval_method: string that indicates whether we evaluate the classification time point-wise or event-wise
:param breaks: array of discontinuities in the time series, relevant only if you look at event-wise
:param return_raw: Boolean that indicates whether we want to return tp, fp and fn or prec, recall and f1
:return: tuple of evaluation metrics
"""
if eval_method == "time-wise":
# point-adjust fscore
if point_adjust:
fp, fn, tp, prec, rec, fscore = get_point_adjust_scores(y_test, pred_labels, true_events)
# normal fscore
else:
_, prec, rec, fscore, _ = get_accuracy_precision_recall_fscore(y_test, pred_labels)
tp = np.sum(pred_labels * y_test)
fp = np.sum(pred_labels) - tp
fn = np.sum(y_test) - tp
# event-wise
else:
logger.error("Evaluation method {} not in [time-wise, event-wise]".format(eval_method))
return 0, 0, 0
return tp, fp, fn, prec, rec, fscore
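# Hedged usage sketch (toy data, not part of the original file): pick a
# threshold with "top_k_time" and evaluate the resulting labels time-wise.
if __name__ == "__main__":
    import logging
    logging.basicConfig()
    demo_logger = logging.getLogger(__name__)
    rng = np.random.default_rng(0)
    score_t_test = rng.random(200)
    y_test = np.zeros(200)
    y_test[60:80] = 1
    score_t_test[60:80] += 1.0  # make the anomalous segment score high
    true_events = get_events(y_test)
    anom_frac = y_test.sum() / len(y_test)
    opt_thres, pred_labels = threshold_and_predict(
        score_t_test, y_test, true_events, demo_logger, anom_frac,
        thres_method="top_k_time")
    print(evaluate_predicted_labels(pred_labels, y_test, true_events, demo_logger))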
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/fc_score.py
================================================
import numpy as np
from sklearn.metrics import precision_score
def get_events(y_test, outlier=1, normal=0):
events = dict()
label_prev = normal
event = 0 # corresponds to no event
event_start = 0
for tim, label in enumerate(y_test):
if label == outlier:
if label_prev == normal:
event += 1
event_start = tim
else:
if label_prev == outlier:
event_end = tim - 1
events[event] = (event_start, event_end)
label_prev = label
    if label_prev == outlier:
        event_end = tim  # the series ends inside an event: close it at the last index
        events[event] = (event_start, event_end)
return events
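# Example (illustrative): get_events([0, 1, 1, 0, 1]) returns
# {1: (1, 2), 2: (4, 4)} - one inclusive (start, end) pair per event.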
def get_composite_fscore_raw(y_test, pred_labels, true_events, return_prec_rec=False):
    tp = np.sum([pred_labels[start:end + 1].any() for start, end in true_events.values()])
    fn = len(true_events) - tp
    rec_e = tp / (tp + fn)
    prec_t = precision_score(y_test, pred_labels)
    # guard against 0/0 before forming the harmonic mean
    if prec_t == 0 and rec_e == 0:
        fscore_c = 0
    else:
        fscore_c = 2 * rec_e * prec_t / (rec_e + prec_t)
    if return_prec_rec:
        return prec_t, rec_e, fscore_c
    return fscore_c
def main():
y_test = np.zeros(100)
y_test[10:20] = 1
y_test[50:60] = 1
pred_labels = np.zeros(100)
pred_labels[15:17] = 1
pred_labels[55:62] = 1
    # pred_labels[51:55] = 1
    true_events = get_events(y_test)
    prec_t, rec_e, fscore_c = get_composite_fscore_raw(y_test, pred_labels, true_events, return_prec_rec=True)
    print("Prec_t: {}, rec_e: {}, fscore_c: {}".format(prec_t, rec_e, fscore_c))
if __name__ == "__main__":
main()
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/metrics.py
================================================
from other_anomaly_baselines.metrics.f1_score_f1_pa import *
from other_anomaly_baselines.metrics.fc_score import *
from other_anomaly_baselines.metrics.precision_at_k import *
from other_anomaly_baselines.metrics.customizable_f1_score import *
from other_anomaly_baselines.metrics.AUC import *
from other_anomaly_baselines.metrics.Matthews_correlation_coefficient import *
from other_anomaly_baselines.metrics.affiliation.generics import convert_vector_to_events
from other_anomaly_baselines.metrics.affiliation.metrics import pr_from_events
from other_anomaly_baselines.metrics.vus.models.feature import Window
from other_anomaly_baselines.metrics.vus.metrics import get_range_vus_roc
import numpy as np
def combine_all_evaluation_scores(y_test, pred_labels, anomaly_scores):
    # note: the predictions feed events_pred and the ground truth feeds events_gt
    events_pred = convert_vector_to_events(pred_labels)
    events_gt = convert_vector_to_events(y_test)
    Trange = (0, len(y_test))
    affiliation = pr_from_events(events_pred, events_gt, Trange)
true_events = get_events(y_test)
    pa_accuracy, pa_precision, pa_recall, pa_f_score = get_adjust_F1PA(pred_labels, y_test)
MCC_score = MCC(y_test, pred_labels)
    vus_results = get_range_vus_roc(anomaly_scores, y_test, 100)  # default slidingWindow = 100
score_list_simple = {
"pa_accuracy":pa_accuracy,
"pa_precision":pa_precision,
"pa_recall":pa_recall,
"pa_f_score":pa_f_score,
"MCC_score":MCC_score,
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"]
}
# return score_list, score_list_simple
return score_list_simple
if __name__ == '__main__':
y_test = np.load("data/events_pred_MSL.npy")+0
pred_labels = np.load("data/events_gt_MSL.npy")+0
anomaly_scores = np.load("data/events_scores_MSL.npy")
print(len(y_test), max(anomaly_scores), min(anomaly_scores))
score_list_simple = combine_all_evaluation_scores(y_test, pred_labels, anomaly_scores)
for key, value in score_list_simple.items():
print('{0:21} :{1:10f}'.format(key, value))
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/precision_at_k.py
================================================
# k is defined as the number of true anomaly points;
# precision is computed only over the top-k scoring points, not the whole series
import numpy as np
def precision_at_k(y_test, score_t_test, pred_labels):
    # threshold the anomaly scores so that roughly k points exceed it
    k = int(np.sum(y_test))
    threshold = np.percentile(score_t_test, 100 * (1 - k / len(y_test)))
    # select the top-k points by score (not by the binary predictions)
    p_at_k = np.where(score_t_test > threshold)[0]
    TP_at_k = sum(y_test[p_at_k])
    precision_at_k = TP_at_k / k
    return precision_at_k
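# Hedged usage sketch (toy data, not part of the original file): precision
# among the k highest-scoring points, where k equals the number of true
# anomaly points.
if __name__ == "__main__":
    y_test = np.zeros(100)
    y_test[10:20] = 1
    scores = np.random.rand(100) * 0.5
    scores[10:18] += 1.0  # 8 of the 10 anomalous points receive high scores
    pred_labels = (scores > 0.9).astype(int)
    print(precision_at_k(y_test, scores, pred_labels))  # ~0.8 with these scores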
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/vus/analysis/robustness_eval.py
================================================
from random import shuffle
import numpy as np
import math
import matplotlib.pyplot as plt
from matplotlib import cm
import pandas as pd
from tqdm import tqdm as tqdm
import time
from sklearn.preprocessing import MinMaxScaler
import random
import os
import sys
module_path = os.path.abspath(os.path.join('../..'))
if module_path not in sys.path:
sys.path.append(module_path)
from other_anomaly_baselines.metrics.vus.utils.slidingWindows import find_length
from other_anomaly_baselines.metrics.vus.utils.metrics import metricor
from other_anomaly_baselines.metrics.vus.models.distance import Fourier
from other_anomaly_baselines.metrics.vus.models.feature import Window
def generate_new_label(label,lag):
if lag < 0:
return np.array(list(label[-lag:]) + [0]*(-lag))
elif lag > 0:
return np.array([0]*lag + list(label[:-lag]))
elif lag == 0:
return label
def compute_anomaly_acc_lag(methods_scores,label,slidingWindow,methods_keys):
lag_range = list(range(-slidingWindow//4,slidingWindow//4,5))
methods_acc = {}
for i,methods_score in enumerate(tqdm(methods_keys)):
dict_acc = {
'R_AUC_ROC': [],
'AUC_ROC': [],
'R_AUC_PR': [],
'AUC_PR': [],
'VUS_ROC': [],
'VUS_PR': [],
'Precision': [],
'Recall': [],
'F': [],
'ExistenceReward':[],
'OverlapReward': [],
'Precision@k': [],
'Rprecision': [],
'Rrecall': [],
'RF': []}
for lag in tqdm(lag_range):
new_label = generate_new_label(label,lag)
grader = metricor()
R_AUC, R_AP, R_fpr, R_tpr, R_prec = grader.RangeAUC(labels=new_label, score=methods_scores[methods_score], window=slidingWindow, plot_ROC=True)
L, fpr, tpr= grader.metric_new(new_label, methods_scores[methods_score], plot_ROC=True)
precision, recall, AP = grader.metric_PR(new_label, methods_scores[methods_score])
Y, Z, X, X_ap, W, Z_ap,avg_auc_3d, avg_ap_3d = generate_curve(new_label,methods_scores[methods_score],2*slidingWindow)
L1 = [ elem for elem in L]
dict_acc['R_AUC_ROC'] +=[R_AUC]
dict_acc['AUC_ROC'] +=[L1[0]]
dict_acc['R_AUC_PR'] +=[R_AP]
dict_acc['AUC_PR'] +=[AP]
dict_acc['VUS_ROC'] +=[avg_auc_3d]
dict_acc['VUS_PR'] +=[avg_ap_3d]
dict_acc['Precision'] +=[L1[1]]
dict_acc['Recall'] +=[L1[2]]
dict_acc['F'] +=[L1[3]]
dict_acc['ExistenceReward']+=[L1[5]]
dict_acc['OverlapReward'] +=[L1[6]]
dict_acc['Precision@k'] +=[L1[9]]
dict_acc['Rprecision'] +=[L1[7]]
dict_acc['Rrecall'] +=[L1[4]]
dict_acc['RF'] +=[L1[8]]
methods_acc[methods_score] = dict_acc
return methods_acc
def compute_anomaly_acc_percentage(methods_scores,label,slidingWindow,methods_keys,pos_first_anom):
list_pos = []
step_a = max(0,(len(label) - pos_first_anom-200))//20
step_b = max(0,pos_first_anom-200)//20
pos_a = min(len(label),pos_first_anom + 200)
pos_b = max(0,pos_first_anom - 200)
list_pos.append((pos_b,pos_a))
for pos_iter in range(20):
pos_a = min(len(label),pos_a + step_a)
pos_b = max(0,pos_b - step_b)
list_pos.append((pos_b,pos_a))
methods_acc = {}
print(list_pos)
for i,methods_score in enumerate(tqdm(methods_keys)):
dict_acc = {
'R_AUC_ROC': [],
'AUC_ROC': [],
'R_AUC_PR': [],
'AUC_PR': [],
'VUS_ROC': [],
'VUS_PR': [],
'Precision': [],
'Recall': [],
'F': [],
'ExistenceReward':[],
'OverlapReward': [],
'Precision@k': [],
'Rprecision': [],
'Rrecall': [],
'RF': []}
for end_pos in tqdm(list_pos):
new_label = label[end_pos[0]:end_pos[1]]
new_score = np.array(methods_scores[methods_score])[end_pos[0]:end_pos[1]]
grader = metricor()
R_AUC, R_AP, R_fpr, R_tpr, R_prec = grader.RangeAUC(labels=new_label, score=new_score, window=slidingWindow, plot_ROC=True)
L, fpr, tpr= grader.metric_new(new_label, new_score, plot_ROC=True)
precision, recall, AP = grader.metric_PR(new_label, new_score)
Y, Z, X, X_ap, W, Z_ap,avg_auc_3d, avg_ap_3d = generate_curve(new_label,new_score,2*slidingWindow)
L1 = [ elem for elem in L]
dict_acc['R_AUC_ROC'] +=[R_AUC]
dict_acc['AUC_ROC'] +=[L1[0]]
dict_acc['R_AUC_PR'] +=[R_AP]
dict_acc['AUC_PR'] +=[AP]
dict_acc['VUS_ROC'] +=[avg_auc_3d]
dict_acc['VUS_PR'] +=[avg_ap_3d]
dict_acc['Precision'] +=[L1[1]]
dict_acc['Recall'] +=[L1[2]]
dict_acc['F'] +=[L1[3]]
dict_acc['ExistenceReward']+=[L1[5]]
dict_acc['OverlapReward'] +=[L1[6]]
dict_acc['Precision@k'] +=[L1[9]]
dict_acc['Rprecision'] +=[L1[7]]
dict_acc['Rrecall'] +=[L1[4]]
dict_acc['RF'] +=[L1[8]]
methods_acc[methods_score] = dict_acc
return methods_acc
def compute_anomaly_acc_noise(methods_scores,label,slidingWindow,methods_keys):
lag_range = list(range(-slidingWindow//2,slidingWindow//2,10))
methods_acc = {}
for i,methods_score in enumerate(tqdm(methods_keys)):
dict_acc = {
'R_AUC_ROC': [],
'AUC_ROC': [],
'R_AUC_PR': [],
'AUC_PR': [],
'VUS_ROC': [],
'VUS_PR': [],
'Precision': [],
'Recall': [],
'F': [],
'ExistenceReward':[],
'OverlapReward': [],
'Precision@k': [],
'Rprecision': [],
'Rrecall': [],
'RF': []}
        for _ in tqdm(lag_range):  # lag_range only fixes the number of noise trials here
            new_label = label
grader = metricor()
noise = np.random.normal(-0.1,0.1,len(methods_scores[methods_score]))
new_score = np.array(methods_scores[methods_score]) + noise
new_score = (new_score - min(new_score))/(max(new_score) - min(new_score))
R_AUC, R_AP, R_fpr, R_tpr, R_prec = grader.RangeAUC(labels=new_label, score=new_score, window=slidingWindow, plot_ROC=True)
L, fpr, tpr= grader.metric_new(new_label, new_score, plot_ROC=True)
precision, recall, AP = grader.metric_PR(new_label, new_score)
Y, Z, X, X_ap, W, Z_ap,avg_auc_3d, avg_ap_3d = generate_curve(new_label,new_score,2*slidingWindow)
L1 = [ elem for elem in L]
dict_acc['R_AUC_ROC'] +=[R_AUC]
dict_acc['AUC_ROC'] +=[L1[0]]
dict_acc['R_AUC_PR'] +=[R_AP]
dict_acc['AUC_PR'] +=[AP]
dict_acc['VUS_ROC'] +=[avg_auc_3d]
dict_acc['VUS_PR'] +=[avg_ap_3d]
dict_acc['Precision'] +=[L1[1]]
dict_acc['Recall'] +=[L1[2]]
dict_acc['F'] +=[L1[3]]
dict_acc['ExistenceReward']+=[L1[5]]
dict_acc['OverlapReward'] +=[L1[6]]
dict_acc['Precision@k'] +=[L1[9]]
dict_acc['Rprecision'] +=[L1[7]]
dict_acc['Rrecall'] +=[L1[4]]
dict_acc['RF'] +=[L1[8]]
methods_acc[methods_score] = dict_acc
return methods_acc
def compute_anomaly_acc_pairwise(methods_scores,label,slidingWindow,method1,method2):
lag_range = list(range(-slidingWindow//4,slidingWindow//4,5))
methods_acc = {}
method_key = [method1]
if method2 is not None:
method_key = [method1,method2]
for i,methods_score in enumerate(tqdm(method_key)):
dict_acc = {
'R_AUC_ROC': [],
'AUC_ROC': [],
'R_AUC_PR': [],
'AUC_PR': [],
'VUS_ROC': [],
'VUS_PR': [],
'Precision': [],
'Recall': [],
'F': [],
'ExistenceReward':[],
'OverlapReward': [],
'Precision@k': [],
'Rprecision': [],
'Rrecall': [],
'RF': []}
for lag in tqdm(range(60)):
new_lag = random.randint(-slidingWindow//4,slidingWindow//4)
new_label = generate_new_label(label,new_lag)
noise = np.random.normal(-0.1,0.1,len(methods_scores[methods_score]))
new_score = np.array(methods_scores[methods_score]) + noise
new_score = (new_score - min(new_score))/(max(new_score) - min(new_score))
grader = metricor()
R_AUC, R_AP, R_fpr, R_tpr, R_prec = grader.RangeAUC(labels=new_label, score=new_score, window=slidingWindow, plot_ROC=True)
L, fpr, tpr= grader.metric_new(new_label, new_score, plot_ROC=True)
precision, recall, AP = grader.metric_PR(new_label, new_score)
#range_anomaly = grader.range_convers_new(new_label)
Y, Z, X, X_ap, W, Z_ap,avg_auc_3d, avg_ap_3d = generate_curve(new_label,new_score,2*slidingWindow)
L1 = [ elem for elem in L]
dict_acc['R_AUC_ROC'] +=[R_AUC]
dict_acc['AUC_ROC'] +=[L1[0]]
dict_acc['R_AUC_PR'] +=[R_AP]
dict_acc['AUC_PR'] +=[AP]
dict_acc['VUS_ROC'] +=[avg_auc_3d]
dict_acc['VUS_PR'] +=[avg_ap_3d]
dict_acc['Precision'] +=[L1[1]]
dict_acc['Recall'] +=[L1[2]]
dict_acc['F'] +=[L1[3]]
dict_acc['ExistenceReward']+=[L1[5]]
dict_acc['OverlapReward'] +=[L1[6]]
dict_acc['Precision@k'] +=[L1[9]]
dict_acc['Rprecision'] +=[L1[7]]
dict_acc['Rrecall'] +=[L1[4]]
dict_acc['RF'] +=[L1[8]]
methods_acc[methods_score] = dict_acc
return methods_acc
def normalize_dict_exp(methods_acc_lag,methods_keys):
key_metrics = [
'VUS_ROC',
'VUS_PR',
'R_AUC_ROC',
'R_AUC_PR',
'AUC_ROC',
'AUC_PR',
'Rprecision',
'Rrecall',
'RF',
'Precision',
'Recall',
'F',
'Precision@k'
][::-1]
norm_methods_acc_lag = {}
for key in methods_keys:
norm_methods_acc_lag[key] = {}
for key_metric in key_metrics:
ts = methods_acc_lag[key][key_metric]
new_ts = list(np.array(ts) - np.mean(ts))
norm_methods_acc_lag[key][key_metric] = new_ts
return norm_methods_acc_lag
def group_dict(methods_acc_lag,methods_keys):
key_metrics = [
'VUS_ROC',
'VUS_PR',
'R_AUC_ROC',
'R_AUC_PR',
'AUC_ROC',
'AUC_PR',
'Rprecision',
'Rrecall',
'RF',
'Precision',
'Recall',
'F',
'Precision@k'
][::-1]
norm_methods_acc_lag = {key:[] for key in key_metrics}
for key in methods_keys:
for key_metric in key_metrics:
ts = list(methods_acc_lag[key][key_metric])
new_ts = list(np.array(ts) - np.mean(ts))
norm_methods_acc_lag[key_metric] += new_ts
return norm_methods_acc_lag
def generate_curve(label,score,slidingWindow):
tpr_3d, fpr_3d, prec_3d, window_3d, avg_auc_3d, avg_ap_3d = metricor().RangeAUC_volume(labels_original=label, score=score, windowSize=1*slidingWindow)
X = np.array(tpr_3d).reshape(1,-1).ravel()
X_ap = np.array(tpr_3d)[:,:-1].reshape(1,-1).ravel()
Y = np.array(fpr_3d).reshape(1,-1).ravel()
W = np.array(prec_3d).reshape(1,-1).ravel()
Z = np.repeat(window_3d, len(tpr_3d[0]))
Z_ap = np.repeat(window_3d, len(tpr_3d[0])-1)
return Y, Z, X, X_ap, W, Z_ap,avg_auc_3d, avg_ap_3d
def box_plot(ax, data, edge_color, fill_color):
    # draw a filled box plot on the given matplotlib axis (ax was previously an
    # undefined global, which raised a NameError when this helper was called)
    bp = ax.boxplot(data, patch_artist=True)
    for element in ['boxes', 'whiskers', 'fliers', 'means', 'medians', 'caps']:
        plt.setp(bp[element], color=edge_color)
    for patch in bp['boxes']:
        patch.set(facecolor=fill_color)
    return bp
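# Hedged usage sketch for the fixed box_plot signature (toy data, hypothetical):
# fig, ax = plt.subplots()
# box_plot(ax, [np.random.rand(50), np.random.rand(50)], "black", "tan")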
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/vus/analysis/score_computation.py
================================================
import numpy as np
import math
import matplotlib.pyplot as plt
from matplotlib import cm
import pandas as pd
from tqdm import tqdm as tqdm
import time
from sklearn.preprocessing import MinMaxScaler
import random
import os
import sys
module_path = os.path.abspath(os.path.join('../..'))
if module_path not in sys.path:
sys.path.append(module_path)
from metrics.vus.utils.slidingWindows import find_length
from metrics.vus.utils.metrics import metricor
from metrics.vus.models.distance import Fourier
from metrics.vus.models.feature import Window
from metrics.vus.models.cnn import cnn
from metrics.vus.models.AE_mlp2 import AE_MLP2
from metrics.vus.models.lstm import lstm
from metrics.vus.models.ocsvm import OCSVM
from metrics.vus.models.poly import POLY
from metrics.vus.models.pca import PCA
from metrics.vus.models.norma import NORMA
from metrics.vus.models.matrix_profile import MatrixProfile
from metrics.vus.models.lof import LOF
from metrics.vus.models.iforest import IForest
def find_section_length(label,length):
best_i = None
best_sum = None
current_subseq = False
for i in range(len(label)):
changed = False
if label[i] == 1:
if current_subseq == False:
current_subseq = True
if best_i is None:
changed = True
best_i = i
best_sum = np.sum(label[max(0,i-200):min(len(label),i+9800)])
else:
if np.sum(label[max(0,i-200):min(len(label),i+9800)]) < best_sum:
changed = True
best_i = i
best_sum = np.sum(label[max(0,i-200):min(len(label),i+9800)])
else:
changed = False
if changed:
diff = i+9800 - len(label)
pos1 = max(0,i-200 - max(0,diff))
pos2 = min(i+9800,len(label))
else:
current_subseq = False
if best_i is not None:
return best_i-pos1,(pos1,pos2)
else:
return None,None
def generate_data(filepath,init_pos,max_length):
df = pd.read_csv(filepath, header=None).to_numpy()
name = filepath.split('/')[-1]
#max_length = 30000
data = df[init_pos:init_pos+max_length,0].astype(float)
label = df[init_pos:init_pos+max_length,1]
pos_first_anom,pos = find_section_length(label,max_length)
data = df[pos[0]:pos[1],0].astype(float)
label = df[pos[0]:pos[1],1]
slidingWindow = find_length(data)
#slidingWindow = 70
X_data = Window(window = slidingWindow).convert(data).to_numpy()
data_train = data[:int(0.1*len(data))]
data_test = data
X_train = Window(window = slidingWindow).convert(data_train).to_numpy()
X_test = Window(window = slidingWindow).convert(data_test).to_numpy()
return pos_first_anom,slidingWindow,data,X_data,data_train,data_test,X_train,X_test,label
def compute_score(methods,slidingWindow,data,X_data,data_train,data_test,X_train,X_test):
methods_scores = {}
for method in methods:
start_time = time.time()
if method == 'IForest':
clf = IForest(n_jobs=1)
x = X_data
clf.fit(x)
score = clf.decision_scores_
score = MinMaxScaler(feature_range=(0,1)).fit_transform(score.reshape(-1,1)).ravel()
score = np.array([score[0]]*math.ceil((slidingWindow-1)/2) + list(score) + [score[-1]]*((slidingWindow-1)//2))
elif method == 'LOF':
clf = LOF(n_neighbors=20, n_jobs=1)
x = X_data
clf.fit(x)
score = clf.decision_scores_
score = MinMaxScaler(feature_range=(0,1)).fit_transform(score.reshape(-1,1)).ravel()
score = np.array([score[0]]*math.ceil((slidingWindow-1)/2) + list(score) + [score[-1]]*((slidingWindow-1)//2))
elif method == 'MatrixProfile':
clf = MatrixProfile(window = slidingWindow)
x = data
clf.fit(x)
score = clf.decision_scores_
score = MinMaxScaler(feature_range=(0,1)).fit_transform(score.reshape(-1,1)).ravel()
score = np.array([score[0]]*math.ceil((slidingWindow-1)/2) + list(score) + [score[-1]]*((slidingWindow-1)//2))
elif method == 'NormA':
clf = NORMA(pattern_length = slidingWindow, nm_size=3*slidingWindow)
x = data
clf.fit(x)
score = clf.decision_scores_
score = MinMaxScaler(feature_range=(0,1)).fit_transform(score.reshape(-1,1)).ravel()
score = np.array([score[0]]*((slidingWindow-1)//2) + list(score) + [score[-1]]*((slidingWindow-1)//2))
elif method == 'PCA':
clf = PCA()
x = X_data
clf.fit(x)
score = clf.decision_scores_
score = MinMaxScaler(feature_range=(0,1)).fit_transform(score.reshape(-1,1)).ravel()
score = np.array([score[0]]*math.ceil((slidingWindow-1)/2) + list(score) + [score[-1]]*((slidingWindow-1)//2))
elif method == 'POLY':
clf = POLY(power=3, window = slidingWindow)
x = data
clf.fit(x)
measure = Fourier()
measure.detector = clf
measure.set_param()
clf.decision_function(measure=measure)
score = clf.decision_scores_
score = MinMaxScaler(feature_range=(0,1)).fit_transform(score.reshape(-1,1)).ravel()
elif method == 'OCSVM':
X_train_ = MinMaxScaler(feature_range=(0,1)).fit_transform(X_train.T).T
X_test_ = MinMaxScaler(feature_range=(0,1)).fit_transform(X_test.T).T
clf = OCSVM(nu=0.05)
clf.fit(X_train_, X_test_)
score = clf.decision_scores_
score = np.array([score[0]]*math.ceil((slidingWindow-1)/2) + list(score) + [score[-1]]*((slidingWindow-1)//2))
score = MinMaxScaler(feature_range=(0,1)).fit_transform(score.reshape(-1,1)).ravel()
elif method == 'LSTM':
clf = lstm(slidingwindow = slidingWindow, predict_time_steps=1, epochs = 50, patience = 5, verbose=0)
clf.fit(data_train, data_test)
measure = Fourier()
measure.detector = clf
measure.set_param()
clf.decision_function(measure=measure)
score = clf.decision_scores_
score = MinMaxScaler(feature_range=(0,1)).fit_transform(score.reshape(-1,1)).ravel()
elif method == 'AE':
clf = AE_MLP2(slidingWindow = slidingWindow, epochs=100, verbose=0)
clf.fit(data_train, data_test)
score = clf.decision_scores_
score = MinMaxScaler(feature_range=(0,1)).fit_transform(score.reshape(-1,1)).ravel()
elif method == 'CNN':
clf = cnn(slidingwindow = slidingWindow, predict_time_steps=1, epochs = 100, patience = 5, verbose=0)
clf.fit(data_train, data_test)
measure = Fourier()
measure.detector = clf
measure.set_param()
clf.decision_function(measure=measure)
score = clf.decision_scores_
score = MinMaxScaler(feature_range=(0,1)).fit_transform(score.reshape(-1,1)).ravel()
#end_time = time.time()
#time_exec = end_time - start_time
#print(method,"\t time: {}".format(time_exec))
methods_scores[method] = score
return methods_scores
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/vus/metrics.py
================================================
from .utils.metrics import metricor
from .analysis.robustness_eval import generate_curve
def get_range_vus_roc(score, labels, slidingWindow):
grader = metricor()
R_AUC_ROC, R_AUC_PR, _, _, _ = grader.RangeAUC(labels=labels, score=score, window=slidingWindow, plot_ROC=True)
_, _, _, _, _, _,VUS_ROC, VUS_PR = generate_curve(labels, score, 2*slidingWindow)
metrics = {'R_AUC_ROC': R_AUC_ROC, 'R_AUC_PR': R_AUC_PR, 'VUS_ROC': VUS_ROC, 'VUS_PR': VUS_PR}
return metrics
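# Hedged usage sketch (toy data, not part of the original file): VUS/range-AUC
# expect a continuous score vector first and the binary labels second, both of
# the same length.
if __name__ == "__main__":
    import numpy as np
    labels = np.zeros(500)
    labels[200:220] = 1
    score = np.random.rand(500) * 0.2
    score[200:220] += 0.8
    print(get_range_vus_roc(score, labels, slidingWindow=20))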
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/vus/models/distance.py
================================================
# -*- coding: utf-8 -*-
"""Classes of distance measure for model type A
"""
import numpy as np
# import matplotlib.pyplot as plt
# import random
from arch import arch_model
# import pandas as pd
import math
# import pmdarima as pm
# from pmdarima import model_selection
# import os
# import dis
# import statistics
# from sklearn import metrics
# import sklearn
class Euclidean:
""" The function class for Lp euclidean norm
----------
Power : int, optional (default=1)
The power of the lp norm. For power = k, the measure is calculagted by |x - y|_k
neighborhood : int, optional (default=max (100, 10*window size))
The length of neighborhood to derivete the normalizing constant D which is based on
the difference of maximum and minimum in the neighborhood minus window.
window: int, optional (default = length of input data)
The length of the subsequence to be compaired
Attributes
----------
decision_scores_ : numpy array of shape (n_samples,)
The outlier scores of the training data.
The higher, the more abnormal. Outliers tend to have higher
scores. This value is available once the detector is
fitted.
detector: Object classifier
the anomaly detector that is used
"""
def __init__(self, power = 1, neighborhood = 100, window = 20, norm = False):
self.power = power
self.window = window
self.neighborhood = neighborhood
self.detector = None
self.decision_scores_ = []
self.norm = norm
        self.X_train = 2  # int sentinel: replaced by the real training data once set_param is called
def measure(self, X, Y, index):
"""Derive the decision score based on the given distance measure
Parameters
----------
X : numpy array of shape (n_samples, )
The real input samples subsequence.
Y : numpy array of shape (n_samples, )
The estimated input samples subsequence.
Index : int
the index of the starting point in the subsequence
Returns
-------
score : float
            dissimilarity score between the two subsequences
"""
X_train = self.X_train
X_train = self.detector.X_train_
power = self.power
window = self.window
neighborhood = self.neighborhood
norm = self.norm
data = X_train
if norm == False:
if X.shape[0] == 0:
score = 0
else:
score = np.linalg.norm(X-Y, power)/(X.shape[0])
self.decision_scores_.append((index, score))
return score
elif type(X_train) == int:
print('Error! Detector is not fed to the object and X_train is not known')
elif neighborhood != 'all':
length = X.shape[0]
neighbor = int(self.neighborhood/2)
if index + neighbor < self.n_train_ and index - neighbor > 0:
region = np.concatenate((data[index - neighbor: index], data[index + window: index + neighbor] ))
D = np.max(region) - np.min(region)
elif index + neighbor >= self.n_train_ and index + window < self.n_train_:
region = np.concatenate((data[self.n_train_ - neighborhood: index], data[index + window: self.n_train_] ))
D = np.max(region) - np.min(region)
elif index + window >= self.n_train_:
region = data[self.n_train_ - neighborhood: index]
D = np.max(region) - np.min(region)
else:
region = np.concatenate((data[0: index], data[index + window: index + neighborhood] ))
D = np.max(region) - np.min(region)
score = np.linalg.norm(X-Y, power)/D/(X.shape[0]**power)
self.decision_scores_.append((index, score))
return score
def set_param(self):
if self.detector != None:
self.window = self.detector.window
self.neighborhood = self.detector.neighborhood
self.n_train_ = self.detector.n_train_
self.X_train = self.detector.X_train_
else:
print('Error! Detector is not fed to the object and X_train is not known')
return self
class Mahalanobis:
""" The function class for Mahalanobis measure
----------
    Probability : boolean, optional (default=False)
        Whether to derive the anomaly score from the probability that such a point occurs
    neighborhood : int, optional (default=max(100, 10*window size))
        The length of the neighborhood used to derive the normalizing constant D, which is based on
        the difference between the maximum and minimum in the neighborhood minus the window.
Attributes
----------
decision_scores_ : numpy array of shape (n_samples,)
The outlier scores of the training data.
The higher, the more abnormal. Outliers tend to have higher
scores. This value is available once the detector is
fitted.
detector: Object classifier
the anomaly detector that is used
"""
def __init__(self, probability = False):
self.probability = probability
self.detector = None
self.decision_scores_ = []
self.mu = 0
def set_param(self):
'''update the parameters with the detector that is used
'''
self.n_initial_ = self.detector.n_initial_
self.estimation = self.detector.estimation
self.X_train = self.detector.X_train_
self.window = self.detector.window
window = self.window
resid = self.X_train - self.estimation
number = max(100, self.window)
self.residual = np.zeros((window, number))
for i in range(number):
self.residual[:, i] = resid[self.n_initial_+i:self.n_initial_+i+window]
self.mu = np.zeros(number)
self.cov = np.cov(self.residual, rowvar=1)
if self.window == 1:
self.cov = (np.sum(np.square(self.residual))/(number - 1))**0.5
return self
def norm_pdf_multivariate(self, x):
'''multivariate normal density function
'''
try:
mu = self.mu
except:
mu = np.zeros(x.shape[0])
sigma = self.cov
size = x.shape[0]
if size == len(mu) and (size, size) == sigma.shape:
det = np.linalg.det(sigma)
if det == 0:
raise NameError("The covariance matrix can't be singular")
norm_const = 1.0/ ( math.pow((2*math.pi),float(size)/2) * math.pow(det,1.0/2) )
x_mu = np.matrix(x - mu)
inv = np.linalg.inv(sigma)
result = math.pow(math.e, -0.5 * (x_mu * inv * x_mu.T))
return norm_const * result
else:
raise NameError("The dimensions of the input don't match")
def normpdf(self, x):
'''univariate normal
'''
mean = 0
sd = float(self.cov)  # np.asscalar was removed in NumPy 1.23
var = float(sd)**2
denom = (2*math.pi*var)**.5
num = math.exp(-(float(x)-float(mean))**2/(2*var))
return num/denom
def measure(self, X, Y, index):
"""Derive the decision score based on the given distance measure
Parameters
----------
X : numpy array of shape (n_samples, )
The real input samples subsequence.
Y : numpy array of shape (n_samples, )
The estimated input samples subsequence.
index : int
the index of the starting point in the subsequence
Returns
-------
score : float
dissimilarity score between the two subsequences
"""
mu = np.zeros(self.detector.window)
cov = self.cov
if self.probability == False:
if X.shape[0] == mu.shape[0]:
score = np.matmul(np.matmul((X-Y-mu).T, cov), (X-Y-mu))/(X.shape[0])
self.decision_scores_.append((index, score))
return score
else:
return (X-Y).T.dot(X-Y)
else:
if len(X) > 1:
prob = self.norm_pdf_multivariate(X-Y)
elif len(X) == 1:
X = X.item()  # np.asscalar was removed in NumPy 1.23
Y = Y.item()
prob = self.normpdf(X-Y)
else:
prob = 1
score = 1 - prob
score = max(score, 0)
self.decision_scores_.append((index, score))
return score
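# Usage sketch (illustrative addition): the Mahalanobis measure normally pulls
# its covariance from a fitted detector via set_param(); here a SimpleNamespace
# stands in for that hypothetical detector and an identity covariance is used.
def _mahalanobis_usage_sketch():
    from types import SimpleNamespace
    n = 5
    m = Mahalanobis(probability=False)
    m.detector = SimpleNamespace(window=n)   # stand-in for a fitted detector
    m.cov = np.eye(n)                        # identity covariance, for illustration only
    x = np.arange(n, dtype=float)
    return m.measure(x, x + 0.1, index=0)    # quadratic-form score, averaged over the window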
class Garch:
""" The function class for garch measure
----------
p, q : int, optional (default=1, 1)
The order of the garch model to be fitted on the residual
mean : string, optional (default='zero' )
The forecast conditional mean.
vol: string, optional (default = 'garch')
he forecast conditional variance.
Attributes
----------
decision_scores_ : numpy array of shape (n_samples,)
The outlier scores of the training data.
The higher, the more abnormal. Outliers tend to have higher
scores. This value is available once the detector is
fitted.
detector: Object classifier
the anomaly detector that is used
"""
def __init__(self, p = 1, q = 1, mean = 'zero', vol = 'garch'):
self.p = p
self.q = q
self.vol = vol
self.mean = mean
self.decision_scores_ = []
def set_param(self):
'''update the parameters with the detector that is used
'''
q = self.q
p=self.p
mean = self.mean
vol = self.vol
if self.detector != None:
self.n_initial_ = self.detector.n_initial_
self.estimation = self.detector.estimation
self.X_train = self.detector.X_train_
self.window = self.detector.window
window = self.window
resid = 10 * (self.X_train - self.estimation)  # scale up so the GARCH fit is numerically stable
model = arch_model(resid, mean=mean, vol=vol, p=p, q=q)
model_fit = model.fit(disp='off')
self.votility = model_fit.conditional_volatility/10  # undo the scaling
else:
print('Error! Detector not fed to the measure')
return self
def measure(self, X, Y, index):
"""Derive the decision score based on the given distance measure
Parameters
----------
X : numpy array of shape (n_samples, )
The real input samples subsequence.
Y : numpy array of shape (n_samples, )
The estimated input samples subsequence.
index : int
the index of the starting point in the subsequence
Returns
-------
score : float
dissimilarity score between the two subsequences
"""
X = np.array(X)
Y = np.array(Y)
length = len(X)
score = 0
if length != 0:
for i in range(length):
sigma = self.votility[index + i]
if sigma != 0:
score += abs(X[i]-Y[i])/sigma
score = score/length
return score
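# Usage sketch (illustrative addition): Garch needs a detector exposing
# n_initial_, estimation, X_train_ and window; a SimpleNamespace holding
# synthetic data stands in for it here so set_param can fit the GARCH(1,1)
# residual model before measure is called.
def _garch_usage_sketch():
    from types import SimpleNamespace
    rng = np.random.default_rng(0)
    x = rng.normal(size=300)                 # synthetic "training" series
    est = np.zeros_like(x)                   # trivial estimation, residual = x
    g = Garch(p=1, q=1)
    g.detector = SimpleNamespace(n_initial_=0, estimation=est, X_train_=x, window=10)
    g.set_param()                            # fits the conditional volatility
    return g.measure(x[:10], est[:10], index=0)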
class SSA_DISTANCE:
""" The function class for SSA measure
good for contextual anomolies
----------
method : string, optional (default='linear' )
The method to fit the line and derives the SSA score
e: float, optional (default = 1)
The upper bound to start new line search for linear method
Attributes
----------
decision_scores_ : numpy array of shape (n_samples,)
The outlier scores of the training data.
The higher, the more abnormal. Outliers tend to have higher
scores. This value is available once the detector is
fitted.
detector: Object classifier
the anomaly detector that is used
"""
def __init__(self, method ='linear', e = 1):
self.method = method
self.decision_scores_ = []
self.e = e
def Linearization(self, X2):
"""Obtain the linearized curve.
Parameters
----------
X2 : numpy array of shape (n, )
the time series curve to be fitted
e: float, integer, or numpy array
the tolerance within which a point is still covered by the current fitted line
Returns
-------
fit: parameters for the fitted linear curve
"""
e = self.e
i = 0
fit = {}
fit['index'] = []
fit['rep'] = []
while i < len(X2):
fit['index'].append(i)
try:
fit['Y'+str(i)]= X2[i]
except:
print(X2.shape, X2)
fit['rep'].append(np.array([i, X2[i]]))
if i+1 >= len(X2):
break
k = X2[i+1]-X2[i]
b = -i*(X2[i+1]-X2[i])+X2[i]
fit['reg' +str(i)]= np.array([k, b])
i += 2
if i >= len(X2):
break
d = np.abs(X2[i]- (k * i+b))
while d < e:
i +=1
if i >= len(X2):
break
d = np.abs(X2[i]- (k * i+b))
return fit
def set_param(self):
'''update the parameters with the detector that is used.
Since the SSA measure doesn't need the attributes of the detector
or characteristics of X_train, the process is omitted.
'''
return self
def measure(self, X2, X3, start_index):
"""Obtain the SSA similarity score.
Parameters
----------
X2 : numpy array of shape (n, )
the reference time series
X3 : numpy array of shape (n, )
the tested time series
e: float, integer, or numpy array
the linearization tolerance
Returns
-------
score: float, the higher, the more dissimilar the two curves
"""
#linearization of data X2 and X3
X2 = np.array(X2)
X3 = np.array(X3)
e = self.e
fit = self.Linearization(X2)
fit2 = self.Linearization(X3)
# line alignment
Index = []
test_list = fit['index'] + fit2['index']
[Index.append(x) for x in test_list if x not in Index]
Y = 0
#Similarity Computation
for i in Index:
if i in fit['index'] and i in fit2['index']:
Y += abs(fit['Y'+str(i)]-fit2['Y'+str(i)])
elif i in fit['index']:
J = np.max(np.where(np.array(fit2['index']) < i ))
index = fit2['index'][J]
k = fit2['reg'+str(index)][0]
b = fit2['reg'+str(index)][1]
value = abs(k * i + b - fit['Y'+str(i)])
Y += value
elif i in fit2['index']:
J = np.max(np.where(np.array(fit['index']) < i ))
index = fit['index'][J]
k = fit['reg'+str(index)][0]
b = fit['reg'+str(index)][1]
value = abs(k * i + b - fit2['Y'+str(i)])
Y += value
if len(Index) != 0:
score = Y/len(Index)
else:
score = 0
self.decision_scores_.append((start_index, score))
if len(X2) == 1:
print('Error! SSA measure doesn\'t apply to singleton' )
else:
return score
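# Usage sketch (illustrative addition): SSA linearizes both curves and averages
# the pointwise gap between the fitted lines; identical inputs score 0 and a
# constant offset of 1.0 scores about 1.
def _ssa_usage_sketch():
    x = np.sin(np.linspace(0, 2 * np.pi, 30))
    ssa = SSA_DISTANCE(e=0.5)
    same = ssa.measure(x, x, start_index=0)           # ~0 for identical curves
    shifted = ssa.measure(x, x + 1.0, start_index=0)  # ~1 for a constant offset
    return same, shifted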
class Fourier:
""" The function class for Fourier measure
good for contextual anomolies
----------
power: int, optional (default = 2)
Lp norm for dissimiarlity measure considered
Attributes
----------
decision_scores_ : numpy array of shape (n_samples,)
The outlier scores of the training data.
The higher, the more abnormal. Outliers tend to have higher
scores. This value is available once the detector is
fitted.
detector: Object classifier
the anomaly detector that is used
"""
def __init__(self, power = 2):
self.decision_scores_ = []
self.power = power
def set_param(self):
'''update the parameters with the detector that is used
since the FFT measure doesn't need the attributes of the detector
or characteristics of X_train, the process is omitted.
'''
return self
def measure(self, X2, X3, start_index):
"""Obtain the SSA similarity score.
Parameters
----------
X2 : numpy array of shape (n, )
the reference timeseries
X3 : numpy array of shape (n, )
the tested timeseries
index: int,
current index for the subseqeuence that is being measured
Returns
-------
score: float, the higher the more dissimilar are the two curves
"""
power = self.power
X2 = np.array(X2)
X3 = np.array(X3)
if len(X2) == 0:
score = 0
else:
X2 = np.fft.fft(X2)
X3 = np.fft.fft(X3)
score = np.linalg.norm(X2 - X3, ord = power)/len(X3)
self.decision_scores_.append((start_index, score))
return score
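# Usage sketch (illustrative addition): the Fourier measure compares the FFTs
# of the two subsequences, so a frequency change is penalized even when the
# pointwise amplitudes look similar.
def _fourier_usage_sketch():
    t = np.linspace(0, 1, 64, endpoint=False)
    a = np.sin(2 * np.pi * 4 * t)
    b = np.sin(2 * np.pi * 5 * t)            # neighboring frequency
    return Fourier(power=2).measure(a, b, start_index=0)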
class DTW:
""" The function class for dynamic time warping measure
----------
method : string, optional (default='L2' )
The distance measure used to derive DTW.
Available: "L2", "L1", and custom
Attributes
----------
decision_scores_ : numpy array of shape (n_samples,)
The outlier scores of the training data.
The higher, the more abnormal. Outliers tend to have higher
scores. This value is available once the detector is
fitted.
detector: Object classifier
the anomaly detector that is used
"""
def __init__(self, method = 'L2'):
self.decision_scores_ = []
if type(method) == str:
if method == 'L1':
distance = lambda x, y: abs(x-y)
elif method == 'L2':
distance = lambda x, y: (x-y)**2
else:
distance = method
self.distance = distance
def set_param(self):
'''update the parameters with the detector that is used
since the DTW measure doesn't need the attributes of the detector
or characteristics of X_train, the process is omitted.
'''
return self
def measure(self, X1, X2, start_index):
"""Obtain the SSA similarity score.
Parameters
----------
X1 : numpy array of shape (n, )
the reference timeseries
X2 : numpy array of shape (n, )
the tested timeseries
index: int,
current index for the subseqeuence that is being measured
Returns
-------
score: float, the higher the more dissimilar are the two curves
"""
distance = self.distance
X1 = np.array(X1)
X2 = np.array(X2)
value = 1
if len(X1) == 0:
# empty input: fall back to dummy arrays and force the score to 0 below
value = 0
X1 = np.zeros(5)
X2 = X1
M = np.zeros((len(X1), len(X2)))
for index_i in range(len(X1)):
for index_j in range(len(X1) - index_i):
L = []
i = index_i
j = index_i + index_j
D = distance(X1[i], X2[j])
try:
L.append(M[i-1, j-1])
except:
L.append(np.inf)
try:
L.append(M[i, j-1])
except:
L.append(np.inf)
try:
L.append(M[i-1, j])
except:
L.append(np.inf)
D += min(L)
M[i,j] = D
if i !=j:
L = []
j = index_i
i = index_i + index_j
D = distance(X1[i], X2[j])
try:
L.append(M[i-1, j-1])
except:
L.append(np.inf)
try:
L.append(M[i, j-1])
except:
L.append(np.inf)
try:
L.append(M[i-1, j])
except:
L.append(np.inf)
D += min(L)
M[i,j] = D
score = M[len(X1)-1, len(X1)-1]/len(X1)
if value == 0:
score = 0
self.decision_scores_.append((start_index, score))
return score
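# Usage sketch (illustrative addition): DTW aligns the two subsequences before
# scoring, so a small temporal shift costs much less than under a plain Lp norm.
def _dtw_usage_sketch():
    t = np.linspace(0, 2 * np.pi, 50)
    a = np.sin(t)
    b = np.sin(t + 0.2)                      # slightly shifted copy
    return DTW(method='L2').measure(a, b, start_index=0)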
class EDRS:
""" The function class for edit distance on real sequences
----------
method : string, optional (default='L1' )
The distance measure used to derive the edit distance.
Available: "L1" and custom
ep: float, optional (default = False, i.e. estimated from the residual in set_param)
the threshold value to decide D_ij
vol : boolean, optional (default = False)
whether to adapt a changing volatility, estimated by GARCH,
for ep at different windows.
Attributes
----------
decision_scores_ : numpy array of shape (n_samples,)
The outlier scores of the training data.
The higher, the more abnormal. Outliers tend to have higher
scores. This value is available once the detector is
fitted.
detector: Object classifier
the anomaly detector that is used
"""
def __init__(self, method = 'L1', ep = False, vol = False):
self.decision_scores_ = []
if type(method) == str:
if method == 'L1':
distance = lambda x, y: abs(x-y)
else:
distance = method
self.distance = distance
self.ep = ep
self.vot = vol
def set_param(self):
'''update ep based on the volatility of the model
'''
estimation = np.array(self.detector.estimation )
initial = self.detector.n_initial_
X = np.array(self.detector.X_train_)
self.initial = initial
residual = estimation[initial:] - X[initial:]
number = len(residual)
#var = (np.sum(np.square(residual))/(number - 1))**0.5
vot = self.vot
if vot == False:
var = np.var(residual)
else:
model = arch_model(10 * residual, mean='Constant', vol='garch', p=1, q=1)
model_fit = model.fit(disp='off')
var = model_fit.conditional_volatility/10
if self.ep == False:
self.ep = 3 * (np.sum(np.square(residual))/(len(residual) - 1))**0.5
else:
self.ep = self.ep
return self
def measure(self, X1, X2, start_index):
"""Obtain the SSA similarity score.
Parameters
----------
X1 : numpy array of shape (n, )
the reference timeseries
X2 : numpy array of shape (n, )
the tested timeseries
index: int,
current index for the subseqeuence that is being measured
Returns
-------
score: float, the higher the more dissimilar are the two curves
"""
distance = self.distance
X1 = np.array(X1)
X2 = np.array(X2)
vot = self.vot
if vot == False:
ep = self.ep
else:
try:
ep = self.ep[start_index - self.initial]
except:
# sometimes start_index equals the length of the series
ep = 0
value = 1
if len(X1)==0:
value =0
X1= np.zeros(5)
X2 = X1
M = np.zeros((len(X1), len(X2)))
M[:, 0] = np.arange(len(X1))
M[0, :] = np.arange(len(X1))
for index_i in range(1, len(X1)):
for index_j in range(len(X1) - index_i):
L = []
i = index_i
j = index_i + index_j
D = distance(X1[i], X2[j])
if D < ep:
M[i, j]= M[i-1, j-1]
else:
try:
L.append(M[i-1, j-1])
except:
L.append(np.inf)
try:
L.append(M[i, j-1])
except:
L.append(np.inf)
try:
L.append(M[i-1, j])
except:
L.append(np.inf)
M[i,j] = 1 + min(L)
if i !=j:
L = []
j = index_i
i = index_i + index_j
D = distance(X1[i], X2[j])
if D < ep:
M[i, j]= M[i-1, j-1]
else:
try:
L.append(M[i-1, j-1])
except:
L.append(np.inf)
try:
L.append(M[i, j-1])
except:
L.append(np.inf)
try:
L.append(M[i-1, j])
except:
L.append(np.inf)
M[i,j] = 1 + min(L)
score = M[len(X1)-1, len(X1)-1]/len(X1)
if value == 0:
score = 0
self.decision_scores_.append((start_index, score))
return score
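# Usage sketch (illustrative addition): with an explicit ep threshold the EDRS
# measure can be used standalone, counting how many edits are needed before the
# pointwise gaps fall below ep.
def _edrs_usage_sketch():
    rng = np.random.default_rng(0)
    a = rng.normal(size=40)
    b = a + rng.normal(scale=0.05, size=40)  # near-duplicate, mostly within ep
    return EDRS(method='L1', ep=0.5).measure(a, b, start_index=0)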
class TWED:
""" Function class for Time-warped edit distance(TWED) measure
----------
method : string, optional (default='L2' )
The distance measure to derive DTW.
Avaliable "L2", "L1", and custom
gamma: float, optiona (default = 0.1)
mismatch penalty
v : float, optional (default = False)
stifness parameter
Attributes
----------
decision_scores_ : numpy array of shape (n_samples,)
The outlier scores of the training data.
The higher, the more abnormal. Outliers tend to have higher
scores. This value is available once the detector is
fitted.
detector: Object classifier
the anomaly detector that is used
"""
def __init__(self, gamma = 0.1, v = 0.1):
self.decision_scores_ = []
self.gamma = gamma
self.v = v
def set_param(self):
'''No need'''
return self
def measure(self, A, B, start_index):
"""Obtain the SSA similarity score.
Parameters
----------
X1 : numpy array of shape (n, )
the reference timeseries
X2 : numpy array of shape (n, )
the tested timeseries
index: int,
current index for the subseqeuence that is being measured
Returns
-------
score: float, the higher the more dissimilar are the two curves
"""
# code modified from Wikipedia
Dlp = lambda x,y: abs(x-y)
timeSB = np.arange(1,len(B)+1)
timeSA = np.arange(1,len(A)+1)
nu = self.v
_lambda = self.gamma
# Reference :
# Marteau, P.; F. (2009). "Time Warp Edit Distance with Stiffness Adjustment for Time Series Matching".
# IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (2): 306–318. arXiv:cs/0703033
# http://people.irisa.fr/Pierre-Francois.Marteau/
# Check if input arguments
if len(A) != len(timeSA):
print("The length of A is not equal length of timeSA")
return None, None
if len(B) != len(timeSB):
print("The length of B is not equal length of timeSB")
return None, None
if nu < 0:
print("nu is negative")
return None, None
# Add padding
A = np.array([0] + list(A))
timeSA = np.array([0] + list(timeSA))
B = np.array([0] + list(B))
timeSB = np.array([0] + list(timeSB))
n = len(A)
m = len(B)
# Dynamical programming
DP = np.zeros((n, m))
# Initialize DP Matrix and set first row and column to infinity
DP[0, :] = np.inf
DP[:, 0] = np.inf
DP[0, 0] = 0
# Compute minimal cost
for i in range(1, n):
for j in range(1, m):
# Calculate and save cost of various operations
C = np.ones((3, 1)) * np.inf
# Deletion in A
C[0] = (
DP[i - 1, j]
+ Dlp(A[i - 1], A[i])
+ nu * (timeSA[i] - timeSA[i - 1])
+ _lambda
)
# Deletion in B
C[1] = (
DP[i, j - 1]
+ Dlp(B[j - 1], B[j])
+ nu * (timeSB[j] - timeSB[j - 1])
+ _lambda
)
# Keep data points in both time series
C[2] = (
DP[i - 1, j - 1]
+ Dlp(A[i], B[j])
+ Dlp(A[i - 1], B[j - 1])
+ nu * (abs(timeSA[i] - timeSB[j]) + abs(timeSA[i - 1] - timeSB[j - 1]))
)
# Choose the operation with the minimal cost and update DP Matrix
DP[i, j] = np.min(C)
distance = DP[n - 1, m - 1]
self.M = DP
self.decision_scores_.append((start_index, distance))
return distance
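# Minimal self-test sketch for the TWED measure (added illustration): gamma and
# v follow the defaults above; the score grows with the stiffness-weighted edits.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(size=30)
    b = a + rng.normal(scale=0.1, size=30)
    twed = TWED(gamma=0.1, v=0.1)
    print('TWED distance:', twed.measure(a, b, start_index=0))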
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/vus/models/feature.py
================================================
# -*- coding: utf-8 -*-
"""Classes of feature mapping for model type B
"""
import numpy as np
# import matplotlib.pyplot as plt
# import random
# from arch import arch_model
import pandas as pd
import math
# import pmdarima as pm
# from pmdarima import model_selection
# import os
# import dis
# import statistics
# from sklearn import metrics
# import sklearn
from tsfresh import extract_features
from statsmodels.tsa.seasonal import seasonal_decompose
# import itertools
# import functools
import warnings
from builtins import range
# from collections import defaultdict
from numpy.linalg import LinAlgError
# from scipy.signal import cwt, find_peaks_cwt, ricker, welch
# from scipy.stats import linregress
# from statsmodels.tools.sm_exceptions import MissingDataError
with warnings.catch_warnings():
# Ignore warnings of the patsy package
warnings.simplefilter("ignore", DeprecationWarning)
from statsmodels.tsa.ar_model import AR
# from statsmodels.tsa.stattools import acf, adfuller, pacf
from hurst import compute_Hc
class Window:
""" The class for rolling window feature mapping.
The mapping converts the original timeseries X into a matrix.
The matrix consists of rows of sliding windows of original X.
"""
def __init__(self, window = 100):
self.window = window
self.detector = None
def convert(self, X):
n = self.window
X = pd.Series(X)
L = []
if n == 0:
df = X
else:
for i in range(n):
L.append(X.shift(i))
df = pd.concat(L, axis = 1)
df = df.iloc[n-1:]
return df
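# Usage sketch (illustrative addition): Window.convert turns a series into a
# matrix whose rows are lagged copies, i.e. row t holds [x_t, x_{t-1}, ..., x_{t-w+1}].
def _window_usage_sketch():
    w = Window(window=3)
    df = w.convert(np.arange(6.0))
    # df has shape (4, 3); its first row is [2., 1., 0.]
    return df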
class tf_Stat:
'''Statistic feature extraction using the tsfresh package.
It calculates 763 features in total, so it might be overly complicated for some models.
Recommended for methods like Isolation Forest, which randomly pick a feature
and then perform the classification. To use with other distance-based models like KNN,
LOF, CBLOF, etc., first pass a function that weights individual features so that
inconsequential features won't cloud the important ones (mean, variance, kurtosis, etc.).
'''
def __init__(self, window = 100, step = 25):
self.window = window
self.step = step
self.detector = None
def convert(self, X):
window = self.window
step = self.step
pos = math.ceil(window/2)
#step <= window
length = X.shape[0]
Xd = pd.DataFrame(X)
Xd.columns = pd.Index(['x'], dtype='object')
Xd['id'] = 1
Xd['time'] = Xd.index
test = np.array(extract_features(Xd.iloc[0+pos-math.ceil(window/2):0+pos + math.floor(window/2)], column_id="id", column_sort="time", column_kind=None, column_value=None).fillna(0))
M = np.zeros((length - window, test.shape[1]+1 ))
i = 0
while i + window <= M.shape[0]:
M[i:i+step, 0]= X[pos + i: pos + i + step]
vector = np.array(extract_features(Xd.iloc[i+pos-math.ceil(window/2):i+pos + math.floor(window/2)], column_id="id", column_sort="time", column_kind=None, column_value=None).fillna(0))
M[i:i+step, 1:] = vector
i+= step
num = M.shape[0]
if i < num:
M[i: num, 0]= X[pos + i: pos + num]
M[i: num, 1:] = np.array(extract_features(Xd.iloc[i+pos-math.ceil(window/2):], column_id="id", column_sort="time", column_kind=None, column_value=None).fillna(0))
return M
class Stat:
'''Statistic feature extraction.
Features include [mean, variance, skewness, kurtosis, autocorrelation, maximum,
minimum, entropy, seasonality, Hurst exponent, AR coef]
'''
def __init__(self, window = 100, data_step = 10, param = [{"coeff": 0, "k": 5}], lag = 1, freq = 720):
self.window = window
self.data_step = data_step
self.detector = None
self.param = param
self.lag = lag
self.freq =freq
if data_step > int(window/2):
raise ValueError('data_step shouldn\'t be greater than half of the window')
def convert(self, X):
freq = self.freq
n = self.window
data_step = self.data_step
X = pd.Series(X)
L = []
if n == 0:
df = X
raise ValueError('window length is set to zero')
else:
for i in range(n):
L.append(X.shift(i))
df = pd.concat(L, axis = 1)
df = df.iloc[n:]
df2 = pd.concat(L[:data_step], axis = 1)
df = df.reset_index()
#value
x0 = df2[math.ceil(n/2) : - math.floor(n/2)].reset_index()
#mean
x1 = (df.mean(axis=1))
#variance
x2 = df.var(axis=1)
#AR-coef
self.ar_function = lambda x: self.ar_coefficient(x)
x3 = df.apply(self.ar_function, axis =1, result_type='expand' )
#autocorrelation
self.auto_function = lambda x: self.autocorrelation(x)
x4 = df.apply(self.auto_function, axis =1, result_type='expand' )
#kurtosis
x5 = (df.kurtosis(axis=1))
#skewness
x6 = (df.skew(axis=1))
#maximum
x7 = (df.max(axis=1))
#minimum
x8 = (df.min(axis=1))
#entropy
self.entropy_function = lambda x: self.sample_entropy(x)
x9 = df.apply(self.entropy_function, axis =1, result_type='expand')
#seasonality
result = seasonal_decompose(X, model='additive', freq = freq, extrapolate_trend='freq')  # note: statsmodels >= 0.13 renamed freq= to period=
#seasonal
x10 = pd.Series(np.array(result.seasonal[math.ceil(n/2) : - math.floor(n/2)]))
#trend
x11 = pd.Series(np.array(result.trend[math.ceil(n/2) : - math.floor(n/2)]))
#resid
x12 = pd.Series(np.array(result.resid[math.ceil(n/2) : - math.floor(n/2)]))
#Hurst component
self.hurst_function = lambda x: self.hurst_f(x)
x13 = df.apply(self.hurst_function, axis =1, result_type='expand')
L = [x0, x1,x2,x3,x4,x5,x6,x7,x8,x9,x10,x11,x12, x13]
M = pd.concat(L, axis = 1)
M = M.drop(columns=['index'])
return M
def ar_coefficient(self, x):
"""
This feature calculator fits the unconditional maximum likelihood
of an autoregressive AR(k) process.
The k parameter is the maximum lag of the process
.. math::
X_{t}=\\varphi_0 +\\sum _{{i=1}}^{k}\\varphi_{i}X_{{t-i}}+\\varepsilon_{t}
For the configurations from param which should contain the maxlag "k" and such an AR process is calculated. Then
the coefficients :math:`\\varphi_{i}` whose index :math:`i` contained from "coeff" are returned.
:param x: the time series to calculate the feature of
:type x: numpy.ndarray
:param param: contains dictionaries {"coeff": x, "k": y} with x,y int
:type param: list
:return x: the different feature values
:return type: pandas.Series
"""
calculated_ar_params = {}
param = self.param
x_as_list = list(x)
res = {}
for parameter_combination in param:
k = parameter_combination["k"]
p = parameter_combination["coeff"]
column_name = "coeff_{}__k_{}".format(p, k)
if k not in calculated_ar_params:
try:
calculated_AR = AR(x_as_list)
calculated_ar_params[k] = calculated_AR.fit(maxlag=k, solver="mle").params
except (LinAlgError, ValueError):
calculated_ar_params[k] = [np.NaN] * k
mod = calculated_ar_params[k]
if p <= k:
try:
res[column_name] = mod[p]
except IndexError:
res[column_name] = 0
else:
res[column_name] = np.NaN
L = [(key, value) for key, value in res.items()]
L0 = []
for item in L:
L0.append(item[1])
return L0
def autocorrelation(self, x):
"""
Calculates the autocorrelation of the specified lag, according to the formula [1]
.. math::
\\frac{1}{(n-l)\\sigma^{2}} \\sum_{t=1}^{n-l}(X_{t}-\\mu )(X_{t+l}-\\mu)
where :math:`n` is the length of the time series :math:`X_i`, :math:`\\sigma^2` its variance and :math:`\\mu` its
mean. `l` denotes the lag.
.. rubric:: References
[1] https://en.wikipedia.org/wiki/Autocorrelation#Estimation
:param x: the time series to calculate the feature of
:type x: numpy.ndarray
:param lag: the lag
:type lag: int
:return: the value of this feature
:return type: float
"""
lag = self.lag
# This is important: If a series is passed, the product below is calculated
# based on the index, which corresponds to squaring the series.
if isinstance(x, pd.Series):
x = x.values
if len(x) < lag:
return np.nan
# Slice the relevant subseries based on the lag
y1 = x[:(len(x) - lag)]
y2 = x[lag:]
# Subtract the mean of the whole series x
x_mean = np.mean(x)
# The result is sometimes referred to as "covariation"
sum_product = np.sum((y1 - x_mean) * (y2 - x_mean))
# Return the normalized unbiased covariance
v = np.var(x)
if np.isclose(v, 0):
return np.NaN
else:
return sum_product / ((len(x) - lag) * v)
def _into_subchunks(self, x, subchunk_length, every_n=1):
"""
Split the time series x into subwindows of length "subchunk_length", starting every "every_n".
For example, with subchunk_length = 3 and every_n = 2, the input data
[0, 1, 2, 3, 4, 5, 6] will be turned into a matrix with rows
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
"""
len_x = len(x)
assert subchunk_length > 1
assert every_n > 0
# how often can we shift a window of size subchunk_length over the input?
num_shifts = (len_x - subchunk_length) // every_n + 1
shift_starts = every_n * np.arange(num_shifts)
indices = np.arange(subchunk_length)
indexer = np.expand_dims(indices, axis=0) + np.expand_dims(shift_starts, axis=1)
return np.asarray(x)[indexer]
def sample_entropy(self, x):
"""
Calculate and return sample entropy of x.
.. rubric:: References
| [1] http://en.wikipedia.org/wiki/Sample_Entropy
| [2] https://www.ncbi.nlm.nih.gov/pubmed/10843903?dopt=Abstract
:param x: the time series to calculate the feature of
:type x: numpy.ndarray
:return: the value of this feature
:return type: float
"""
x = np.array(x)
# if one of the values is NaN, we can not compute anything meaningful
if np.isnan(x).any():
return np.nan
m = 2 # common value for m, according to wikipedia...
tolerance = 0.2 * np.std(x) # 0.2 is a common value for r, according to wikipedia...
# Split time series and save all templates of length m
# Basically we turn [1, 2, 3, 4] into [1, 2], [2, 3], [3, 4]
xm = self._into_subchunks(x, m)
# Now calculate the maximum distance between each of those pairs
# np.abs(xmi - xm).max(axis=1)
# and check how many are below the tolerance.
# For speed reasons, we are not doing this in a nested for loop,
# but with numpy magic.
# Example:
# if x = [1, 2, 3]
# then xm = [[1, 2], [2, 3]]
# so we will subtract xm from [1, 2] => [[0, 0], [-1, -1]]
# and from [2, 3] => [[1, 1], [0, 0]]
# taking the abs and max gives us:
# [0, 1] and [1, 0]
# as the diagonal elements are always 0, we subtract 1.
B = np.sum([np.sum(np.abs(xmi - xm).max(axis=1) <= tolerance) - 1 for xmi in xm])
# Similar for computing A
xmp1 = self._into_subchunks(x, m + 1)
A = np.sum([np.sum(np.abs(xmi - xmp1).max(axis=1) <= tolerance) - 1 for xmi in xmp1])
# Return SampEn
return -np.log(A / B)
def hurst_f(self, x):
H,c, M = compute_Hc(x)
return [H, c]
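# Minimal self-test sketch (added illustration): only the statistics that need
# no detector or external fit are exercised here; sample_entropy and
# autocorrelation come straight from the Stat class above.
if __name__ == "__main__":
    x = np.sin(np.linspace(0, 8 * np.pi, 200))
    s = Stat(window=20, data_step=5)
    print('sample entropy:', s.sample_entropy(x))
    print('lag-1 autocorrelation:', s.autocorrelation(x))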
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/vus/utils/metrics.py
================================================
from sklearn import metrics
import numpy as np
import math
# import matplotlib.pyplot as plt
class metricor:
def __init__(self, a = 1, probability = True, bias = 'flat'):
self.a = a
self.probability = probability
self.bias = bias
def detect_model(self, model, label, contamination = 0.1, window = 100, is_A = False, is_threshold = True):
if is_threshold:
score = self.scale_threshold(model.decision_scores_, model._mu, model._sigma)
else:
score = self.scale_contamination(model.decision_scores_, contamination = contamination)
if is_A is False:
scoreX = np.zeros(len(score)+window)
scoreX[math.ceil(window/2): len(score)+window - math.floor(window/2)] = score
else:
scoreX = score
self.score_=scoreX
L = self.metric(label, scoreX)
return L
def labels_conv(self, preds):
'''return indices of predicted anomaly
'''
# p = np.zeros(len(preds))
index = np.where(preds >= 0.5)
return index[0]
def labels_conv_binary(self, preds):
'''return predicted label
'''
p = np.zeros(len(preds))
index = np.where(preds >= 0.5)
p[index[0]] = 1
return p
def w(self, AnomalyRange, p):
MyValue = 0
MaxValue = 0
start = AnomalyRange[0]
AnomalyLength = AnomalyRange[1] - AnomalyRange[0] + 1
for i in range(start, start +AnomalyLength):
bi = self.b(i, AnomalyLength)
MaxValue += bi
if i in p:
MyValue += bi
return MyValue/MaxValue
def Cardinality_factor(self, Anomolyrange, Prange):
score = 0
start = Anomolyrange[0]
end = Anomolyrange[1]
for i in Prange:
if i[0] >= start and i[0] <= end:
score +=1
elif start >= i[0] and start <= i[1]:
score += 1
elif end >= i[0] and end <= i[1]:
score += 1
elif start >= i[0] and end <= i[1]:
score += 1
if score == 0:
return 0
else:
return 1/score
def b(self, i, length):
bias = self.bias
if bias == 'flat':
return 1
elif bias == 'front-end bias':
return length - i + 1
elif bias == 'back-end bias':
return i
else:
if i <= length/2:
return i
else:
return length - i + 1
def scale_threshold(self, score, score_mu, score_sigma):
return (score >= (score_mu + 3*score_sigma)).astype(int)
def metric_new(self, label, score, plot_ROC=False, alpha=0.2,coeff=3):
'''input:
Real labels and anomaly score in prediction
output:
AUC,
Precision,
Recall,
F-score,
Range-precision,
Range-recall,
Range-Fscore,
Precision@k,
k is chosen to be # of outliers in real labels
'''
if np.sum(label) == 0:
print('All labels are 0. Label must have ground truth values for calculating the AUC score.')
return None
if score is None or np.isnan(score).any():
print('Score must not be None.')
return None
#area under curve
auc = metrics.roc_auc_score(label, score)
# plot ROC curve
if plot_ROC:
fpr, tpr, thresholds = metrics.roc_curve(label, score)
# display = metrics.RocCurveDisplay(fpr=fpr, tpr=tpr, roc_auc=auc)
# display.plot()
#precision, recall, F
preds = score > (np.mean(score)+coeff*np.std(score))
if np.sum(preds) == 0:
preds = score > (np.mean(score)+2*np.std(score))
if np.sum(preds) == 0:
preds = score > (np.mean(score)+1*np.std(score))
Precision, Recall, F, Support = metrics.precision_recall_fscore_support(label, preds, zero_division=0)
precision = Precision[1]
recall = Recall[1]
f = F[1]
#range anomaly
Rrecall, ExistenceReward, OverlapReward = self.range_recall_new(label, preds, alpha)
Rprecision = self.range_recall_new(preds, label, 0)[0]
if Rprecision + Rrecall==0:
Rf=0
else:
Rf = 2 * Rrecall * Rprecision / (Rprecision + Rrecall)
# top-k
k = int(np.sum(label))
threshold = np.percentile(score, 100 * (1-k/len(label)))
# precision_at_k = metrics.top_k_accuracy_score(label, score, k)
p_at_k = np.where(score > threshold)[0]  # indices of the top-k anomaly scores
TP_at_k = sum(label[p_at_k])
precision_at_k = TP_at_k/k
L = [auc, precision, recall, f, Rrecall, ExistenceReward, OverlapReward, Rprecision, Rf, precision_at_k]
if plot_ROC:
return L, fpr, tpr
return L
def metric_PR(self, label, score):
precision, recall, thresholds = metrics.precision_recall_curve(label, score)
# plt.figure()
# disp = metrics.PrecisionRecallDisplay(precision=precision, recall=recall)
# disp.plot()
AP = metrics.auc(recall, precision)
#AP = metrics.average_precision_score(label, score)
return precision, recall, AP
def range_recall_new(self, labels, preds, alpha):
p = np.where(preds == 1)[0] # positions of predicted label==1
range_pred = self.range_convers_new(preds)
range_label = self.range_convers_new(labels)
Nr = len(range_label) # total # of real anomaly segments
ExistenceReward = self.existence_reward(range_label, p)
OverlapReward = 0
for i in range_label:
OverlapReward += self.w(i, p) * self.Cardinality_factor(i, range_pred)
score = alpha * ExistenceReward + (1-alpha) * OverlapReward
if Nr != 0:
return score/Nr, ExistenceReward/Nr, OverlapReward/Nr
else:
return 0,0,0
def range_convers_new(self, label):
'''
input: array of binary values
output: list of ordered pairs [(a0, b0), (a1, b1), ...] giving the non-zero segments of the input
'''
L = []
i = 0
j = 0
while j < len(label):
# print(i)
while label[i] == 0:
i+=1
if i >= len(label):
break
j = i+1
# print('j'+str(j))
if j >= len(label):
if j==len(label):
L.append((i,j-1))
break
while label[j] != 0:
j+=1
if j >= len(label):
L.append((i,j-1))
break
if j >= len(label):
break
L.append((i, j-1))
i = j
return L
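# Worked example (added illustration):
#   metricor().range_convers_new(np.array([0, 1, 1, 0, 0, 1]))  ->  [(1, 2), (5, 5)]
# i.e. the two anomaly segments cover indices 1-2 and index 5.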
def existence_reward(self, labels, preds):
'''
labels: list of ordered pairs
preds: predicted data
'''
score = 0
for i in labels:
if np.sum(np.multiply(preds <= i[1], preds >= i[0])) > 0:
score += 1
return score
def num_nonzero_segments(self, x):
count=0
if x[0]>0:
count+=1
for i in range(1, len(x)):
if x[i]>0 and x[i-1]==0:
count+=1
return count
def extend_postive_range(self, x, window=5):
label = x.copy().astype(float)
L = self.range_convers_new(label) # index of non-zero segments
length = len(label)
for k in range(len(L)):
s = L[k][0]
e = L[k][1]
x1 = np.arange(e,min(e+window//2,length))
label[x1] += np.sqrt(1 - (x1-e)/(window))
x2 = np.arange(max(s-window//2,0),s)
label[x2] += np.sqrt(1 - (s-x2)/(window))
label = np.minimum(np.ones(length), label)
return label
def extend_postive_range_individual(self, x, percentage=0.2):
label = x.copy().astype(float)
L = self.range_convers_new(label) # index of non-zero segments
length = len(label)
for k in range(len(L)):
s = L[k][0]
e = L[k][1]
l0 = int((e-s+1)*percentage)
x1 = np.arange(e,min(e+l0,length))
label[x1] += np.sqrt(1 - (x1-e)/(2*l0))
x2 = np.arange(max(s-l0,0),s)
label[x2] += np.sqrt(1 - (s-x2)/(2*l0))
label = np.minimum(np.ones(length), label)
return label
def TPR_FPR_RangeAUC(self, labels, pred, P, L):
product = labels * pred
TP = np.sum(product)
# recall = min(TP/P,1)
P_new = (P+np.sum(labels))/2 # so TPR is neither large nor small
# P_new = np.sum(labels)
recall = min(TP/P_new,1)
# recall = TP/np.sum(labels)
# print('recall '+str(recall))
existence = 0
for seg in L:
if np.sum(product[seg[0]:(seg[1]+1)])>0:
existence += 1
if len(L) == 0:
existence_ratio = existence
else:
existence_ratio = existence/len(L)
# print(existence_ratio)
# TPR_RangeAUC = np.sqrt(recall*existence_ratio)
# print(existence_ratio)
TPR_RangeAUC = recall*existence_ratio
FP = np.sum(pred) - TP
# TN = np.sum((1-pred) * (1-labels))
# FPR_RangeAUC = FP/(FP+TN)
N_new = len(labels) - P_new
FPR_RangeAUC = FP/N_new
Precision_RangeAUC = TP/np.sum(pred)
return TPR_RangeAUC, FPR_RangeAUC, Precision_RangeAUC
def RangeAUC(self, labels, score, window=0, percentage=0, plot_ROC=False, AUC_type='window'):
# AUC_type='window'/'percentage'
score_sorted = -np.sort(-score)
P = np.sum(labels)
# print(np.sum(labels))
if AUC_type=='window':
labels = self.extend_postive_range(labels, window=window)
else:
labels = self.extend_postive_range_individual(labels, percentage=percentage)
# print(np.sum(labels))
L = self.range_convers_new(labels)
TPR_list = [0]
FPR_list = [0]
Precision_list = [1]
for i in np.linspace(0, len(score)-1, 250).astype(int):
threshold = score_sorted[i]
# print('thre='+str(threshold))
pred = score>= threshold
TPR, FPR, Precision = self.TPR_FPR_RangeAUC(labels, pred, P,L)
TPR_list.append(TPR)
FPR_list.append(FPR)
Precision_list.append(Precision)
TPR_list.append(1)
FPR_list.append(1) # otherwise, range-AUC will stop earlier than (1,1)
tpr = np.array(TPR_list)
fpr = np.array(FPR_list)
prec = np.array(Precision_list)
width = fpr[1:] - fpr[:-1]
height = (tpr[1:] + tpr[:-1])/2
AUC_range = np.sum(width*height)
width_PR = tpr[1:-1] - tpr[:-2]
height_PR = (prec[1:] + prec[:-1])/2
AP_range = np.sum(width_PR*height_PR)
if plot_ROC:
return AUC_range, AP_range, fpr, tpr, prec
return AUC_range
# TPR_FPR_window
def RangeAUC_volume(self, labels_original, score, windowSize):
score_sorted = -np.sort(-score)
tpr_3d=[]
fpr_3d=[]
prec_3d=[]
auc_3d=[]
ap_3d=[]
window_3d = np.arange(0, windowSize+1, 1)
P = np.sum(labels_original)
for window in window_3d:
labels = self.extend_postive_range(labels_original, window)
# print(np.sum(labels))
L = self.range_convers_new(labels)
TPR_list = [0]
FPR_list = [0]
Precision_list = [1]
for i in np.linspace(0, len(score)-1, 250).astype(int):
threshold = score_sorted[i]
# print('thre='+str(threshold))
pred = score>= threshold
TPR, FPR, Precision = self.TPR_FPR_RangeAUC(labels, pred, P,L)
TPR_list.append(TPR)
FPR_list.append(FPR)
Precision_list.append(Precision)
TPR_list.append(1)
FPR_list.append(1) # otherwise, range-AUC will stop earlier than (1,1)
tpr = np.array(TPR_list)
fpr = np.array(FPR_list)
prec = np.array(Precision_list)
tpr_3d.append(tpr)
fpr_3d.append(fpr)
prec_3d.append(prec)
width = fpr[1:] - fpr[:-1]
height = (tpr[1:] + tpr[:-1])/2
AUC_range = np.sum(width*height)
auc_3d.append(AUC_range)
width_PR = tpr[1:-1] - tpr[:-2]
height_PR = (prec[1:] + prec[:-1])/2
AP_range = np.sum(width_PR*height_PR)
ap_3d.append(AP_range)
return tpr_3d, fpr_3d, prec_3d, window_3d, sum(auc_3d)/len(window_3d), sum(ap_3d)/len(window_3d)
def generate_curve(label,score,slidingWindow):
tpr_3d, fpr_3d, prec_3d, window_3d, avg_auc_3d, avg_ap_3d = metricor().RangeAUC_volume(labels_original=label, score=score, windowSize=1*slidingWindow)
X = np.array(tpr_3d).reshape(1,-1).ravel()
X_ap = np.array(tpr_3d)[:,:-1].reshape(1,-1).ravel()
Y = np.array(fpr_3d).reshape(1,-1).ravel()
W = np.array(prec_3d).reshape(1,-1).ravel()
Z = np.repeat(window_3d, len(tpr_3d[0]))
Z_ap = np.repeat(window_3d, len(tpr_3d[0])-1)
return Y, Z, X, X_ap, W, Z_ap,avg_auc_3d, avg_ap_3d
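# Minimal self-test sketch (added illustration): synthetic scores with a single
# injected anomaly segment; RangeAUC and the volume-based summaries returned by
# generate_curve should both come out well above chance here.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    label = np.zeros(1000, dtype=int)
    label[300:320] = 1                        # one injected anomaly segment
    score = rng.random(1000)
    score[300:320] += 1.0                     # scores elevated on the segment
    m = metricor()
    print('Range-AUC:', m.RangeAUC(labels=label, score=score, window=10))
    _, _, _, _, _, _, avg_auc, avg_ap = generate_curve(label, score, slidingWindow=10)
    print('VUS-ROC:', avg_auc, 'VUS-PR:', avg_ap)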
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/metrics/vus/utils/slidingWindows.py
================================================
from statsmodels.tsa.stattools import acf
from scipy.signal import argrelextrema
import numpy as np
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
# determine sliding window (period) based on ACF
def find_length(data):
if len(data.shape)>1:
return 0
data = data[:min(20000, len(data))]
base = 3
auto_corr = acf(data, nlags=400, fft=True)[base:]
local_max = argrelextrema(auto_corr, np.greater)[0]
try:
max_local_max = np.argmax([auto_corr[lcm] for lcm in local_max])
if local_max[max_local_max]<3 or local_max[max_local_max]>300:
return 125
return local_max[max_local_max]+base
except (ValueError, IndexError):
return 125
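# Minimal self-test sketch (added illustration): find_length should recover a
# period close to 50 for this synthetic sine; 125 is its fallback value.
if __name__ == "__main__":
    t = np.arange(2000)
    x = np.sin(2 * np.pi * t / 50) + 0.1 * np.random.randn(2000)
    print('estimated window:', find_length(x))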
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/models/AnomalyTransformer.py
================================================
import numpy as np
import math
from math import sqrt
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import weight_norm
class PositionalEmbedding(nn.Module):
def __init__(self, d_model, max_len=5000):
super(PositionalEmbedding, self).__init__()
# Compute the positional encodings once in log space.
pe = torch.zeros(max_len, d_model).float()
pe.requires_grad = False
position = torch.arange(0, max_len).float().unsqueeze(1)
div_term = (torch.arange(0, d_model, 2).float() * -(math.log(10000.0) / d_model)).exp()
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0)
self.register_buffer('pe', pe)
def forward(self, x):
return self.pe[:, :x.size(1)]
class TokenEmbedding(nn.Module):
def __init__(self, c_in, d_model):
super(TokenEmbedding, self).__init__()
padding = 1 if torch.__version__ >= '1.5.0' else 2
self.tokenConv = nn.Conv1d(in_channels=c_in, out_channels=d_model,
kernel_size=3, padding=padding, padding_mode='circular', bias=False)
for m in self.modules():
if isinstance(m, nn.Conv1d):
nn.init.kaiming_normal_(m.weight, mode='fan_in', nonlinearity='leaky_relu')
def forward(self, x):
x = self.tokenConv(x.permute(0, 2, 1)).transpose(1, 2)
return x
class DataEmbedding(nn.Module):
def __init__(self, c_in, d_model, dropout=0.0):
super(DataEmbedding, self).__init__()
self.value_embedding = TokenEmbedding(c_in=c_in, d_model=d_model)
self.position_embedding = PositionalEmbedding(d_model=d_model)
self.dropout = nn.Dropout(p=dropout)
def forward(self, x):
x = self.value_embedding(x) + self.position_embedding(x)
return self.dropout(x)
class TriangularCausalMask():
def __init__(self, B, L, device="cpu"):
mask_shape = [B, 1, L, L]
with torch.no_grad():
self._mask = torch.triu(torch.ones(mask_shape, dtype=torch.bool), diagonal=1).to(device)
@property
def mask(self):
return self._mask
class AnomalyAttention(nn.Module):
def __init__(self, win_size, mask_flag=True, scale=None, attention_dropout=0.0, output_attention=False, cud_device=None):
super(AnomalyAttention, self).__init__()
self.scale = scale
self.mask_flag = mask_flag
self.output_attention = output_attention
self.dropout = nn.Dropout(attention_dropout)
self.device = cud_device
window_size = win_size
self.distances = torch.zeros((window_size, window_size)).to(self.device)
for i in range(window_size):
for j in range(window_size):
self.distances[i][j] = abs(i - j)
def forward(self, queries, keys, values, sigma, attn_mask):
B, L, H, E = queries.shape
_, S, _, D = values.shape
scale = self.scale or 1. / sqrt(E)
scores = torch.einsum("blhe,bshe->bhls", queries, keys)
if self.mask_flag:
if attn_mask is None:
attn_mask = TriangularCausalMask(B, L, device=queries.device)
scores.masked_fill_(attn_mask.mask, -np.inf)
attn = scale * scores
sigma = sigma.transpose(1, 2) # B L H -> B H L
window_size = attn.shape[-1]
sigma = torch.sigmoid(sigma * 5) + 1e-5
sigma = torch.pow(3, sigma) - 1
sigma = sigma.unsqueeze(-1).repeat(1, 1, 1, window_size) # B H L L
prior = self.distances.unsqueeze(0).unsqueeze(0).repeat(sigma.shape[0], sigma.shape[1], 1, 1).to(self.device)
prior = 1.0 / (math.sqrt(2 * math.pi) * sigma) * torch.exp(-prior ** 2 / 2 / (sigma ** 2))
series = self.dropout(torch.softmax(attn, dim=-1))
V = torch.einsum("bhls,bshd->blhd", series, values)
if self.output_attention:
return (V.contiguous(), series, prior, sigma)
else:
return (V.contiguous(), None)
class AttentionLayer(nn.Module):
def __init__(self, attention, d_model, n_heads, d_keys=None,
d_values=None):
super(AttentionLayer, self).__init__()
d_keys = d_keys or (d_model // n_heads)
d_values = d_values or (d_model // n_heads)
self.norm = nn.LayerNorm(d_model)
self.inner_attention = attention
self.query_projection = nn.Linear(d_model,
d_keys * n_heads)
self.key_projection = nn.Linear(d_model,
d_keys * n_heads)
self.value_projection = nn.Linear(d_model,
d_values * n_heads)
self.sigma_projection = nn.Linear(d_model,
n_heads)
self.out_projection = nn.Linear(d_values * n_heads, d_model)
self.n_heads = n_heads
def forward(self, queries, keys, values, attn_mask):
B, L, _ = queries.shape
_, S, _ = keys.shape
H = self.n_heads
x = queries
queries = self.query_projection(queries).view(B, L, H, -1)
keys = self.key_projection(keys).view(B, S, H, -1)
values = self.value_projection(values).view(B, S, H, -1)
sigma = self.sigma_projection(x).view(B, L, H)
out, series, prior, sigma = self.inner_attention(
queries,
keys,
values,
sigma,
attn_mask
)
out = out.view(B, L, -1)
return self.out_projection(out), series, prior, sigma
class EncoderLayer(nn.Module):
def __init__(self, attention, d_model, d_ff=None, dropout=0.1, activation="relu"):
super(EncoderLayer, self).__init__()
d_ff = d_ff or 4 * d_model
self.attention = attention
self.conv1 = nn.Conv1d(in_channels=d_model, out_channels=d_ff, kernel_size=1)
self.conv2 = nn.Conv1d(in_channels=d_ff, out_channels=d_model, kernel_size=1)
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
self.dropout = nn.Dropout(dropout)
self.activation = F.relu if activation == "relu" else F.gelu
def forward(self, x, attn_mask=None):
new_x, attn, mask, sigma = self.attention(
x, x, x,
attn_mask=attn_mask
)
x = x + self.dropout(new_x)
y = x = self.norm1(x)
y = self.dropout(self.activation(self.conv1(y.transpose(-1, 1))))
y = self.dropout(self.conv2(y).transpose(-1, 1))
return self.norm2(x + y), attn, mask, sigma
class Encoder(nn.Module):
def __init__(self, attn_layers, norm_layer=None):
super(Encoder, self).__init__()
self.attn_layers = nn.ModuleList(attn_layers)
self.norm = norm_layer
def forward(self, x, attn_mask=None):
# x [B, L, D]
series_list = []
prior_list = []
sigma_list = []
for attn_layer in self.attn_layers:
x, series, prior, sigma = attn_layer(x, attn_mask=attn_mask)
series_list.append(series)
prior_list.append(prior)
sigma_list.append(sigma)
if self.norm is not None:
x = self.norm(x)
return x, series_list, prior_list, sigma_list
class AnomalyTransformer(nn.Module):
def __init__(self, win_size, enc_in, c_out, d_model=512, n_heads=8, e_layers=3, d_ff=512,
dropout=0.0, activation='gelu', output_attention=True, cud_device=None):
super(AnomalyTransformer, self).__init__()
self.output_attention = output_attention
# Encoding
self.embedding = DataEmbedding(enc_in, d_model, dropout)
# Encoder
self.encoder = Encoder(
[
EncoderLayer(
AttentionLayer(
AnomalyAttention(win_size, False, attention_dropout=dropout, output_attention=output_attention, cud_device=cud_device),
d_model, n_heads),
d_model,
d_ff,
dropout=dropout,
activation=activation
) for l in range(e_layers)
],
norm_layer=torch.nn.LayerNorm(d_model)
)
self.projection = nn.Linear(d_model, c_out, bias=True)
def forward(self, x):
enc_out = self.embedding(x)
enc_out, series, prior, sigmas = self.encoder(enc_out)
enc_out = self.projection(enc_out)
if self.output_attention:
return enc_out, series, prior, sigmas
else:
return enc_out # [B, L, D]
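# Minimal shape-check sketch (added illustration): the hyperparameters below are
# arbitrary small values, not the settings used by the training scripts.
if __name__ == "__main__":
    model = AnomalyTransformer(win_size=100, enc_in=3, c_out=3, d_model=64,
                               n_heads=4, e_layers=2, d_ff=64,
                               cud_device=torch.device('cpu'))
    x = torch.randn(2, 100, 3)                       # (batch, window, channels)
    out, series, prior, sigmas = model(x)
    print(out.shape, len(series), series[0].shape)   # (2, 100, 3), 2, (2, 4, 100, 100)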
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/models/DCdetector.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import math
from math import sqrt
import os
from einops import rearrange, reduce, repeat
from torch.nn.utils import weight_norm
from tkinter import _flatten  # used to flatten the nested attention lists in forward()
class DAC_structure(nn.Module):
def __init__(self, win_size, patch_size, channel, mask_flag=True, scale=None, attention_dropout=0.05,
output_attention=False):
super(DAC_structure, self).__init__()
self.scale = scale
self.mask_flag = mask_flag
self.output_attention = output_attention
self.dropout = nn.Dropout(attention_dropout)
self.window_size = win_size
self.patch_size = patch_size
self.channel = channel
def forward(self, queries_patch_size, queries_patch_num, keys_patch_size, keys_patch_num, values, patch_index,
attn_mask):
# Patch-wise Representation
B, L, H, E = queries_patch_size.shape # batch_size*channel, patch_num, n_head, d_model/n_head
scale_patch_size = self.scale or 1. / sqrt(E)
scores_patch_size = torch.einsum("blhe,bshe->bhls", queries_patch_size,
keys_patch_size) # batch*ch, nheads, p_num, p_num
attn_patch_size = scale_patch_size * scores_patch_size
series_patch_size = self.dropout(torch.softmax(attn_patch_size, dim=-1)) # B*D_model H N N
# In-patch Representation
B, L, H, E = queries_patch_num.shape # batch_size*channel, patch_size, n_head, d_model/n_head
scale_patch_num = self.scale or 1. / sqrt(E)
scores_patch_num = torch.einsum("blhe,bshe->bhls", queries_patch_num,
keys_patch_num) # batch*ch, nheads, p_size, p_size
attn_patch_num = scale_patch_num * scores_patch_num
series_patch_num = self.dropout(torch.softmax(attn_patch_num, dim=-1)) # B*D_model H S S
# Upsampling
series_patch_size = repeat(series_patch_size, 'b l m n -> b l (m repeat_m) (n repeat_n)',
repeat_m=self.patch_size[patch_index], repeat_n=self.patch_size[patch_index])
series_patch_num = series_patch_num.repeat(1, 1, self.window_size // self.patch_size[patch_index],
self.window_size // self.patch_size[patch_index])
series_patch_size = reduce(series_patch_size, '(b reduce_b) l m n-> b l m n', 'mean', reduce_b=self.channel)
series_patch_num = reduce(series_patch_num, '(b reduce_b) l m n-> b l m n', 'mean', reduce_b=self.channel)
if self.output_attention:
return series_patch_size, series_patch_num
else:
return (None)
class AttentionLayer(nn.Module):
def __init__(self, attention, d_model, patch_size, channel, n_heads, win_size, d_keys=None, d_values=None):
super(AttentionLayer, self).__init__()
d_keys = d_keys or (d_model // n_heads)
d_values = d_values or (d_model // n_heads)
self.norm = nn.LayerNorm(d_model)
self.inner_attention = attention
self.patch_size = patch_size
self.channel = channel
self.window_size = win_size
self.n_heads = n_heads
self.patch_query_projection = nn.Linear(d_model, d_keys * n_heads)
self.patch_key_projection = nn.Linear(d_model, d_keys * n_heads)
self.out_projection = nn.Linear(d_values * n_heads, d_model)
self.value_projection = nn.Linear(d_model, d_values * n_heads)
def forward(self, x_patch_size, x_patch_num, x_ori, patch_index, attn_mask):
# patch_size
B, L, M = x_patch_size.shape
H = self.n_heads
queries_patch_size, keys_patch_size = x_patch_size, x_patch_size
queries_patch_size = self.patch_query_projection(queries_patch_size).view(B, L, H, -1)
keys_patch_size = self.patch_key_projection(keys_patch_size).view(B, L, H, -1)
# patch_num
B, L, M = x_patch_num.shape
queries_patch_num, keys_patch_num = x_patch_num, x_patch_num
queries_patch_num = self.patch_query_projection(queries_patch_num).view(B, L, H, -1)
keys_patch_num = self.patch_key_projection(keys_patch_num).view(B, L, H, -1)
# x_ori
B, L, _ = x_ori.shape
values = self.value_projection(x_ori).view(B, L, H, -1)
series, prior = self.inner_attention(
queries_patch_size, queries_patch_num,
keys_patch_size, keys_patch_num,
values, patch_index,
attn_mask
)
return series, prior
class PositionalEmbedding(nn.Module):
def __init__(self, d_model, max_len=5000):
super(PositionalEmbedding, self).__init__()
# Compute the positional encodings once in log space.
pe = torch.zeros(max_len, d_model).float()
pe.requires_grad = False
position = torch.arange(0, max_len).float().unsqueeze(1)
div_term = (torch.arange(0, d_model, 2).float() * -(math.log(10000.0) / d_model)).exp()
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0)
self.register_buffer('pe', pe)
def forward(self, x):
return self.pe[:, :x.size(1)]
class TokenEmbedding(nn.Module):
def __init__(self, c_in, d_model):
super(TokenEmbedding, self).__init__()
padding = 1 if torch.__version__ >= '1.5.0' else 2
self.tokenConv = nn.Conv1d(in_channels=c_in, out_channels=d_model,
kernel_size=3, padding=padding, padding_mode='circular', bias=False)
for m in self.modules():
if isinstance(m, nn.Conv1d):
nn.init.kaiming_normal_(m.weight, mode='fan_in', nonlinearity='leaky_relu')
def forward(self, x):
x = self.tokenConv(x.permute(0, 2, 1)).transpose(1, 2)
return x
class DataEmbedding(nn.Module):
def __init__(self, c_in, d_model, dropout=0.05):
super(DataEmbedding, self).__init__()
self.value_embedding = TokenEmbedding(c_in=c_in, d_model=d_model)
self.position_embedding = PositionalEmbedding(d_model=d_model)
self.dropout = nn.Dropout(p=dropout)
def forward(self, x):
x = self.value_embedding(x) + self.position_embedding(x)
return self.dropout(x)
class RevIN(nn.Module):
def __init__(self, num_features: int, eps=1e-5, affine=True):
"""
:param num_features: the number of features or channels
:param eps: a value added for numerical stability
:param affine: if True, RevIN has learnable affine parameters
"""
super(RevIN, self).__init__()
self.num_features = num_features
self.eps = eps
self.affine = affine
if self.affine:
self._init_params()
def forward(self, x, mode: str):
if mode == 'norm':
self._get_statistics(x)
x = self._normalize(x)
elif mode == 'denorm':
x = self._denormalize(x)
else:
raise NotImplementedError
return x
def _init_params(self):
# initialize RevIN params: (C,)
self.affine_weight = torch.ones(self.num_features)
self.affine_bias = torch.zeros(self.num_features)
self.affine_weight = self.affine_weight.to(
device=torch.device('cuda:0' if torch.cuda.is_available() else 'cpu'))
self.affine_bias = self.affine_bias.to(device=torch.device('cuda:0' if torch.cuda.is_available() else 'cpu'))
def _get_statistics(self, x):
dim2reduce = tuple(range(1, x.ndim - 1))
self.mean = torch.mean(x, dim=dim2reduce, keepdim=True).detach()
self.stdev = torch.sqrt(torch.var(x, dim=dim2reduce, keepdim=True, unbiased=False) + self.eps).detach()
def _normalize(self, x):
x = x - self.mean
x = x / self.stdev
if self.affine:
x = x * self.affine_weight
x = x + self.affine_bias
return x
def _denormalize(self, x):
if self.affine:
x = x - self.affine_bias
x = x / (self.affine_weight + self.eps * self.eps)
x = x * self.stdev
x = x + self.mean
return x
class Encoder(nn.Module):
def __init__(self, attn_layers, norm_layer=None):
super(Encoder, self).__init__()
self.attn_layers = nn.ModuleList(attn_layers)
self.norm = norm_layer
def forward(self, x_patch_size, x_patch_num, x_ori, patch_index, attn_mask=None):
series_list = []
prior_list = []
for attn_layer in self.attn_layers:
series, prior = attn_layer(x_patch_size, x_patch_num, x_ori, patch_index, attn_mask=attn_mask)
series_list.append(series)
prior_list.append(prior)
return series_list, prior_list
class DCdetector(nn.Module):
def __init__(self, win_size, enc_in, c_out, n_heads=1, d_model=256, e_layers=3, patch_size=[3, 5, 7], channel=55,
d_ff=512, dropout=0.0, activation='gelu', output_attention=True):
super(DCdetector, self).__init__()
self.output_attention = output_attention
self.patch_size = patch_size
self.channel = channel
self.win_size = win_size
# Patching List
self.embedding_patch_size = nn.ModuleList()
self.embedding_patch_num = nn.ModuleList()
for i, patchsize in enumerate(self.patch_size):
self.embedding_patch_size.append(DataEmbedding(patchsize, d_model, dropout))
self.embedding_patch_num.append(DataEmbedding(self.win_size // patchsize, d_model, dropout))
self.embedding_window_size = DataEmbedding(enc_in, d_model, dropout)
# Dual Attention Encoder
self.encoder = Encoder(
[
AttentionLayer(
DAC_structure(win_size, patch_size, channel, False, attention_dropout=dropout,
output_attention=output_attention),
d_model, patch_size, channel, n_heads, win_size) for l in range(e_layers)
],
norm_layer=torch.nn.LayerNorm(d_model)
)
self.projection = nn.Linear(d_model, c_out, bias=True)
def forward(self, x):
B, L, M = x.shape # Batch win_size channel
series_patch_mean = []
prior_patch_mean = []
revin_layer = RevIN(num_features=M)
# Instance Normalization Operation
x = revin_layer(x, 'norm')
x_ori = self.embedding_window_size(x)
# Multi-scale Patching Operation
for patch_index, patchsize in enumerate(self.patch_size):
x_patch_size, x_patch_num = x, x
x_patch_size = rearrange(x_patch_size, 'b l m -> b m l') # Batch channel win_size
x_patch_num = rearrange(x_patch_num, 'b l m -> b m l') # Batch channel win_size
x_patch_size = rearrange(x_patch_size, 'b m (n p) -> (b m) n p', p=patchsize)
x_patch_size = self.embedding_patch_size[patch_index](x_patch_size)
x_patch_num = rearrange(x_patch_num, 'b m (p n) -> (b m) p n', p=patchsize)
x_patch_num = self.embedding_patch_num[patch_index](x_patch_num)
series, prior = self.encoder(x_patch_size, x_patch_num, x_ori, patch_index)
series_patch_mean.append(series), prior_patch_mean.append(prior)
series_patch_mean = list(_flatten(series_patch_mean))
prior_patch_mean = list(_flatten(prior_patch_mean))
if self.output_attention:
return series_patch_mean, prior_patch_mean
else:
return None
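# Minimal shape-check sketch (added illustration): win_size must be divisible by
# every patch size (105 = 3*5*7) and channel must match enc_in; all values here
# are arbitrary. The device choice mirrors how RevIN pins its affine parameters.
if __name__ == "__main__":
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    model = DCdetector(win_size=105, enc_in=4, c_out=4, n_heads=1, d_model=64,
                       e_layers=1, patch_size=[3, 5, 7], channel=4).to(device)
    x = torch.randn(2, 105, 4, device=device)   # (batch, window, channels)
    series, prior = model(x)
    print(len(series), series[0].shape)         # 3, (2, 1, 105, 105)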
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/models/GPT4TS.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers.models.gpt2.modeling_gpt2 import GPT2Model
from einops import rearrange
import math
class PositionalEmbedding(nn.Module):
def __init__(self, d_model, max_len=5000):
super(PositionalEmbedding, self).__init__()
# Compute the positional encodings once in log space.
pe = torch.zeros(max_len, d_model).float()
pe.requires_grad = False
position = torch.arange(0, max_len).float().unsqueeze(1)
div_term = (torch.arange(0, d_model, 2).float()
* -(math.log(10000.0) / d_model)).exp()
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0)
self.register_buffer('pe', pe)
def forward(self, x):
return self.pe[:, :x.size(1)]
class TokenEmbedding(nn.Module):
def __init__(self, c_in, d_model):
super(TokenEmbedding, self).__init__()
padding = 1 if torch.__version__ >= '1.5.0' else 2
self.tokenConv = nn.Conv1d(in_channels=c_in, out_channels=d_model,
kernel_size=3, padding=padding, padding_mode='circular', bias=False)
for m in self.modules():
if isinstance(m, nn.Conv1d):
nn.init.kaiming_normal_(
m.weight, mode='fan_in', nonlinearity='leaky_relu')
def forward(self, x):
x = self.tokenConv(x.permute(0, 2, 1)).transpose(1, 2)
return x
class FixedEmbedding(nn.Module):
def __init__(self, c_in, d_model):
super(FixedEmbedding, self).__init__()
w = torch.zeros(c_in, d_model).float()
w.requires_grad = False
position = torch.arange(0, c_in).float().unsqueeze(1)
div_term = (torch.arange(0, d_model, 2).float()
* -(math.log(10000.0) / d_model)).exp()
w[:, 0::2] = torch.sin(position * div_term)
w[:, 1::2] = torch.cos(position * div_term)
self.emb = nn.Embedding(c_in, d_model)
self.emb.weight = nn.Parameter(w, requires_grad=False)
def forward(self, x):
return self.emb(x).detach()
class TemporalEmbedding(nn.Module):
def __init__(self, d_model, embed_type='fixed', freq='h'):
super(TemporalEmbedding, self).__init__()
minute_size = 4
hour_size = 24
weekday_size = 7
day_size = 32
month_size = 13
Embed = FixedEmbedding if embed_type == 'fixed' else nn.Embedding
if freq == 't':
self.minute_embed = Embed(minute_size, d_model)
self.hour_embed = Embed(hour_size, d_model)
self.weekday_embed = Embed(weekday_size, d_model)
self.day_embed = Embed(day_size, d_model)
self.month_embed = Embed(month_size, d_model)
def forward(self, x):
x = x.long()
minute_x = self.minute_embed(x[:, :, 4]) if hasattr(
self, 'minute_embed') else 0.
hour_x = self.hour_embed(x[:, :, 3])
weekday_x = self.weekday_embed(x[:, :, 2])
day_x = self.day_embed(x[:, :, 1])
month_x = self.month_embed(x[:, :, 0])
return hour_x + weekday_x + day_x + month_x + minute_x
class TimeFeatureEmbedding(nn.Module):
def __init__(self, d_model, embed_type='timeF', freq='h'):
super(TimeFeatureEmbedding, self).__init__()
freq_map = {'h': 4, 't': 5, 's': 6,
'm': 1, 'a': 1, 'w': 2, 'd': 3, 'b': 3}
d_inp = freq_map[freq]
self.embed = nn.Linear(d_inp, d_model, bias=False)
def forward(self, x):
return self.embed(x)
class DataEmbedding(nn.Module):
def __init__(self, c_in, d_model, embed_type='fixed', freq='h', dropout=0.1):
super(DataEmbedding, self).__init__()
self.value_embedding = TokenEmbedding(c_in=c_in, d_model=d_model)
self.position_embedding = PositionalEmbedding(d_model=d_model)
self.temporal_embedding = TemporalEmbedding(d_model=d_model, embed_type=embed_type,
freq=freq) if embed_type != 'timeF' else TimeFeatureEmbedding(
d_model=d_model, embed_type=embed_type, freq=freq)
self.dropout = nn.Dropout(p=dropout)
def forward(self, x, x_mark):
if x_mark is None:
x = self.value_embedding(x) + self.position_embedding(x)
else:
x = self.value_embedding(
x) + self.temporal_embedding(x_mark) + self.position_embedding(x)
return self.dropout(x)
class DataEmbedding_wo_pos(nn.Module):
def __init__(self, c_in, d_model, embed_type='fixed', freq='h', dropout=0.1):
super(DataEmbedding_wo_pos, self).__init__()
self.value_embedding = TokenEmbedding(c_in=c_in, d_model=d_model)
self.position_embedding = PositionalEmbedding(d_model=d_model)
self.temporal_embedding = TemporalEmbedding(d_model=d_model, embed_type=embed_type,
freq=freq) if embed_type != 'timeF' else TimeFeatureEmbedding(
d_model=d_model, embed_type=embed_type, freq=freq)
self.dropout = nn.Dropout(p=dropout)
def forward(self, x, x_mark):
if x_mark is None:
x = self.value_embedding(x)
else:
x = self.value_embedding(x) + self.temporal_embedding(x_mark)
return self.dropout(x)
class Model(nn.Module):
def __init__(self, configs):
super(Model, self).__init__()
self.is_ln = configs.ln
self.task_name = configs.task_name
self.pred_len = configs.pred_len
self.seq_len = configs.seq_len
self.patch_size = configs.patch_size
self.stride = configs.stride
self.d_ff = configs.d_ff
self.patch_num = (configs.seq_len + self.pred_len - self.patch_size) // self.stride + 1
self.padding_patch_layer = nn.ReplicationPad1d((0, self.stride))
self.patch_num += 1
self.enc_embedding = DataEmbedding(configs.enc_in * self.patch_size, configs.d_model, configs.embed,
configs.freq,
configs.dropout)
# self.gpt2 = GPT2Model.from_pretrained('gpt2', output_attentions=True, output_hidden_states=True)
import os
# Check whether the default checkpoint path exists; if not, load from the alternative path
if not os.path.exists("/dev_data/lz/gpt2"):
self.gpt2 = GPT2Model.from_pretrained('/SSD/lz/gpt2', output_attentions=True, output_hidden_states=True)
else:
self.gpt2 = GPT2Model.from_pretrained('/dev_data/lz/gpt2', output_attentions=True,
output_hidden_states=True)
self.gpt2.h = self.gpt2.h[:configs.gpt_layers]
for i, (name, param) in enumerate(self.gpt2.named_parameters()):
if 'ln' in name or 'wpe' in name: # or 'mlp' in name:
param.requires_grad = True
elif 'mlp' in name and configs.mlp == 1:
param.requires_grad = True
else:
param.requires_grad = False
if configs.use_gpu:
device = torch.device('cuda:{}'.format(0))
self.gpt2.to(device=device)
# self.in_layer = nn.Linear(configs.patch_size, configs.d_model)
if self.task_name == 'long_term_forecast' or self.task_name == 'short_term_forecast':
self.predict_linear_pre = nn.Linear(self.seq_len, self.pred_len + self.seq_len)
self.predict_linear = nn.Linear(self.patch_size, configs.enc_in)
self.ln = nn.LayerNorm(configs.d_ff)
self.out_layer = nn.Linear(configs.d_ff, configs.c_out)
if self.task_name == 'imputation':
self.ln_proj = nn.LayerNorm(configs.d_model)
self.out_layer = nn.Linear(
configs.d_model,
configs.c_out,
bias=True)
if self.task_name == 'anomaly_detection':
self.ln_proj = nn.LayerNorm(configs.d_ff)
self.out_layer = nn.Linear(
configs.d_ff,
configs.c_out,
bias=True)
if self.task_name == 'classification':
self.act = F.gelu
self.dropout = nn.Dropout(0.1)
self.ln_proj = nn.LayerNorm(configs.d_model * self.patch_num)
self.out_layer = nn.Linear(configs.d_model * self.patch_num, configs.num_class)
def forward(self, x_enc, x_mark_enc, x_dec, x_mark_dec, mask=None):
if self.task_name == 'long_term_forecast' or self.task_name == 'short_term_forecast':
dec_out = self.forecast(x_enc, x_mark_enc, x_dec, x_mark_dec)
return dec_out[:, -self.pred_len:, :] # [B, L, D]
if self.task_name == 'imputation':
dec_out = self.imputation(
x_enc, x_mark_enc, x_dec, x_mark_dec, mask)
return dec_out # [B, L, D]
if self.task_name == 'anomaly_detection':
dec_out = self.anomaly_detection(x_enc)
return dec_out # [B, L, D]
if self.task_name == 'classification':
dec_out = self.classification(x_enc, x_mark_enc)
return dec_out # [B, N]
return None
def imputation(self, x_enc, x_mark_enc, x_dec, x_mark_dec, mask):
B, L, M = x_enc.shape
# Normalization from Non-stationary Transformer
means = torch.sum(x_enc, dim=1) / torch.sum(mask == 1, dim=1)
means = means.unsqueeze(1).detach()
x_enc = x_enc - means
x_enc = x_enc.masked_fill(mask == 0, 0)
stdev = torch.sqrt(torch.sum(x_enc * x_enc, dim=1) /
torch.sum(mask == 1, dim=1) + 1e-5)
stdev = stdev.unsqueeze(1).detach()
x_enc /= stdev
enc_out = self.enc_embedding(x_enc, x_mark_enc) # [B,T,C]
outputs = self.gpt2(inputs_embeds=enc_out).last_hidden_state
outputs = self.ln_proj(outputs)
dec_out = self.out_layer(outputs)
# De-Normalization from Non-stationary Transformer
dec_out = dec_out * \
(stdev[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
dec_out = dec_out + \
(means[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
return dec_out
def forecast(self, x_enc, x_mark_enc, x_dec, x_mark_dec):
B, L, M = x_enc.shape
# Normalization from Non-stationary Transformer
means = x_enc.mean(1, keepdim=True).detach()
x_enc = x_enc - means
stdev = torch.sqrt(
torch.var(x_enc, dim=1, keepdim=True, unbiased=False) + 1e-5)
x_enc /= stdev
# embedding
enc_out = self.enc_embedding(x_enc, x_mark_enc) # [B,T,C]
enc_out = self.predict_linear_pre(enc_out.permute(0, 2, 1)).permute(
0, 2, 1) # align temporal dimension
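# pad the feature dimension up to GPT-2's hidden size (768) so the tensor can be
# passed directly as inputs_embeds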
enc_out = torch.nn.functional.pad(enc_out, (0, 768 - enc_out.shape[-1]))
# enc_out = rearrange(enc_out, 'b l m -> b m l')
# enc_out = self.padding_patch_layer(enc_out)
# enc_out = enc_out.unfold(dimension=-1, size=self.patch_size, step=self.stride)
# enc_out = self.predict_linear(enc_out)
# enc_out = rearrange(enc_out, 'b m n p -> b n (m p)')
dec_out = self.gpt2(inputs_embeds=enc_out).last_hidden_state
dec_out = dec_out[:, :, :self.d_ff]
# dec_out = dec_out.reshape(B, -1)
# dec_out = self.ln(dec_out)
dec_out = self.out_layer(dec_out)
# print(dec_out.shape)
# dec_out = dec_out.reshape(B, self.pred_len + self.seq_len, -1)
# De-Normalization from Non-stationary Transformer
dec_out = dec_out * \
(stdev[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
dec_out = dec_out + \
(means[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
return dec_out
def anomaly_detection(self, x_enc):
B, L, M = x_enc.shape
# Normalization from Non-stationary Transformer
seg_num = 25
x_enc = rearrange(x_enc, 'b (n s) m -> b n s m', s=seg_num)
means = x_enc.mean(2, keepdim=True).detach()
x_enc = x_enc - means
stdev = torch.sqrt(
torch.var(x_enc, dim=2, keepdim=True, unbiased=False) + 1e-5)
x_enc /= stdev
x_enc = rearrange(x_enc, 'b n s m -> b (n s) m')
# means = x_enc.mean(1, keepdim=True).detach()
# x_enc = x_enc - means
# stdev = torch.sqrt(
# torch.var(x_enc, dim=1, keepdim=True, unbiased=False) + 1e-5)
# x_enc /= stdev
# enc_out = self.enc_embedding(x_enc, None) # [B,T,C]
enc_out = torch.nn.functional.pad(x_enc, (0, 768 - x_enc.shape[-1]))
outputs = self.gpt2(inputs_embeds=enc_out).last_hidden_state
outputs = outputs[:, :, :self.d_ff]
# outputs = self.ln_proj(outputs)
dec_out = self.out_layer(outputs)
# De-Normalization from Non-stationary Transformer
dec_out = rearrange(dec_out, 'b (n s) m -> b n s m', s=seg_num)
dec_out = dec_out * \
(stdev[:, :, 0, :].unsqueeze(2).repeat(
1, 1, seg_num, 1))
dec_out = dec_out + \
(means[:, :, 0, :].unsqueeze(2).repeat(
1, 1, seg_num, 1))
dec_out = rearrange(dec_out, 'b n s m -> b (n s) m')
# dec_out = dec_out * \
# (stdev[:, 0, :].unsqueeze(1).repeat(
# 1, self.pred_len + self.seq_len, 1))
# dec_out = dec_out + \
# (means[:, 0, :].unsqueeze(1).repeat(
# 1, self.pred_len + self.seq_len, 1))
return dec_out
def classification(self, x_enc, x_mark_enc):
# print(x_enc.shape)
B, L, M = x_enc.shape
input_x = rearrange(x_enc, 'b l m -> b m l')
input_x = self.padding_patch_layer(input_x)
input_x = input_x.unfold(dimension=-1, size=self.patch_size, step=self.stride)
input_x = rearrange(input_x, 'b m n p -> b n (p m)')
outputs = self.enc_embedding(input_x, None)
outputs = self.gpt2(inputs_embeds=outputs).last_hidden_state
outputs = self.act(outputs).reshape(B, -1)
outputs = self.ln_proj(outputs)
# outputs = self.dropout(outputs)
outputs = self.out_layer(outputs)
return outputs
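# --- Usage sketch: a minimal smoke test with hypothetical hyper-parameters.
# Assumes a local GPT-2 checkpoint exists at one of the hard-coded paths above;
# seq_len must be a multiple of the hard-coded seg_num (25) in anomaly_detection.
if __name__ == '__main__':
    from types import SimpleNamespace
    cfg = SimpleNamespace(ln=True, task_name='anomaly_detection', pred_len=0, seq_len=100,
                          patch_size=1, stride=1, d_ff=768, enc_in=55, d_model=768,
                          embed='timeF', freq='h', dropout=0.1, gpt_layers=6, mlp=0,
                          use_gpu=False, c_out=55)
    model = Model(cfg)
    x = torch.randn(2, 100, 55)  # [batch, seq_len, channels]
    print(model(x, None, None, None).shape)  # expected: torch.Size([2, 100, 55])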
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/models/TimesNet.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.fft
import math
class Inception_Block_V1(nn.Module):
def __init__(self, in_channels, out_channels, num_kernels=6, init_weight=True):
super(Inception_Block_V1, self).__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.num_kernels = num_kernels
kernels = []
for i in range(self.num_kernels):
kernels.append(nn.Conv2d(in_channels, out_channels, kernel_size=2 * i + 1, padding=i))
self.kernels = nn.ModuleList(kernels)
if init_weight:
self._initialize_weights()
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
if m.bias is not None:
nn.init.constant_(m.bias, 0)
def forward(self, x):
res_list = []
for i in range(self.num_kernels):
res_list.append(self.kernels[i](x))
res = torch.stack(res_list, dim=-1).mean(-1)
return res
class PositionalEmbedding(nn.Module):
def __init__(self, d_model, max_len=5000):
super(PositionalEmbedding, self).__init__()
# Compute the positional encodings once in log space.
pe = torch.zeros(max_len, d_model).float()
pe.requires_grad = False
position = torch.arange(0, max_len).float().unsqueeze(1)
div_term = (torch.arange(0, d_model, 2).float()
* -(math.log(10000.0) / d_model)).exp()
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0)
self.register_buffer('pe', pe)
def forward(self, x):
return self.pe[:, :x.size(1)]
class TokenEmbedding(nn.Module):
def __init__(self, c_in, d_model):
super(TokenEmbedding, self).__init__()
padding = 1 if torch.__version__ >= '1.5.0' else 2
self.tokenConv = nn.Conv1d(in_channels=c_in, out_channels=d_model,
kernel_size=3, padding=padding, padding_mode='circular', bias=False)
for m in self.modules():
if isinstance(m, nn.Conv1d):
nn.init.kaiming_normal_(
m.weight, mode='fan_in', nonlinearity='leaky_relu')
def forward(self, x):
x = self.tokenConv(x.permute(0, 2, 1)).transpose(1, 2)
return x
class FixedEmbedding(nn.Module):
def __init__(self, c_in, d_model):
super(FixedEmbedding, self).__init__()
w = torch.zeros(c_in, d_model).float()
w.requires_grad = False
position = torch.arange(0, c_in).float().unsqueeze(1)
div_term = (torch.arange(0, d_model, 2).float()
* -(math.log(10000.0) / d_model)).exp()
w[:, 0::2] = torch.sin(position * div_term)
w[:, 1::2] = torch.cos(position * div_term)
self.emb = nn.Embedding(c_in, d_model)
self.emb.weight = nn.Parameter(w, requires_grad=False)
def forward(self, x):
return self.emb(x).detach()
class TemporalEmbedding(nn.Module):
def __init__(self, d_model, embed_type='fixed', freq='h'):
super(TemporalEmbedding, self).__init__()
minute_size = 4
hour_size = 24
weekday_size = 7
day_size = 32
month_size = 13
Embed = FixedEmbedding if embed_type == 'fixed' else nn.Embedding
if freq == 't':
self.minute_embed = Embed(minute_size, d_model)
self.hour_embed = Embed(hour_size, d_model)
self.weekday_embed = Embed(weekday_size, d_model)
self.day_embed = Embed(day_size, d_model)
self.month_embed = Embed(month_size, d_model)
def forward(self, x):
x = x.long()
minute_x = self.minute_embed(x[:, :, 4]) if hasattr(
self, 'minute_embed') else 0.
hour_x = self.hour_embed(x[:, :, 3])
weekday_x = self.weekday_embed(x[:, :, 2])
day_x = self.day_embed(x[:, :, 1])
month_x = self.month_embed(x[:, :, 0])
return hour_x + weekday_x + day_x + month_x + minute_x
class TimeFeatureEmbedding(nn.Module):
def __init__(self, d_model, embed_type='timeF', freq='h'):
super(TimeFeatureEmbedding, self).__init__()
freq_map = {'h': 4, 't': 5, 's': 6,
'm': 1, 'a': 1, 'w': 2, 'd': 3, 'b': 3}
d_inp = freq_map[freq]
self.embed = nn.Linear(d_inp, d_model, bias=False)
def forward(self, x):
return self.embed(x)
class DataEmbedding(nn.Module):
def __init__(self, c_in, d_model, embed_type='fixed', freq='h', dropout=0.1):
super(DataEmbedding, self).__init__()
self.value_embedding = TokenEmbedding(c_in=c_in, d_model=d_model)
self.position_embedding = PositionalEmbedding(d_model=d_model)
self.temporal_embedding = TemporalEmbedding(d_model=d_model, embed_type=embed_type,
freq=freq) if embed_type != 'timeF' else TimeFeatureEmbedding(
d_model=d_model, embed_type=embed_type, freq=freq)
self.dropout = nn.Dropout(p=dropout)
def forward(self, x, x_mark):
if x_mark is None:
x = self.value_embedding(x) + self.position_embedding(x)
else:
x = self.value_embedding(
x) + self.temporal_embedding(x_mark) + self.position_embedding(x)
return self.dropout(x)
def FFT_for_Period(x, k=2):
# [B, T, C]
xf = torch.fft.rfft(x, dim=1)
# find period by amplitudes
frequency_list = abs(xf).mean(0).mean(-1)
frequency_list[0] = 0
_, top_list = torch.topk(frequency_list, k)
top_list = top_list.detach().cpu().numpy()
period = x.shape[1] // top_list
return period, abs(xf).mean(-1)[:, top_list]
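# Illustrative example (hypothetical numbers): for a length-100 window dominated
# by a cycle of 25 steps, abs(xf) peaks at frequency index 4, so
# period = 100 // 4 = 25 and TimesBlock below folds the sequence into a 4 x 25
# grid for the 2D convolutions.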
class TimesBlock(nn.Module):
def __init__(self, configs):
super(TimesBlock, self).__init__()
self.seq_len = configs.seq_len
self.pred_len = configs.pred_len
self.k = configs.top_k
# parameter-efficient design
self.conv = nn.Sequential(
Inception_Block_V1(configs.d_model, configs.d_ff,
num_kernels=configs.num_kernels),
nn.GELU(),
Inception_Block_V1(configs.d_ff, configs.d_model,
num_kernels=configs.num_kernels)
)
def forward(self, x):
B, T, N = x.size()
period_list, period_weight = FFT_for_Period(x, self.k)
res = []
for i in range(self.k):
period = period_list[i]
# padding
if (self.seq_len + self.pred_len) % period != 0:
length = (
((self.seq_len + self.pred_len) // period) + 1) * period
padding = torch.zeros([x.shape[0], (length - (self.seq_len + self.pred_len)), x.shape[2]]).to(x.device)
out = torch.cat([x, padding], dim=1)
else:
length = (self.seq_len + self.pred_len)
out = x
# reshape
out = out.reshape(B, length // period, period,
N).permute(0, 3, 1, 2).contiguous()
# 2D conv: from 1d Variation to 2d Variation
out = self.conv(out)
# reshape back
out = out.permute(0, 2, 3, 1).reshape(B, -1, N)
res.append(out[:, :(self.seq_len + self.pred_len), :])
res = torch.stack(res, dim=-1)
# adaptive aggregation
period_weight = F.softmax(period_weight, dim=1)
period_weight = period_weight.unsqueeze(
1).unsqueeze(1).repeat(1, T, N, 1)
res = torch.sum(res * period_weight, -1)
# residual connection
res = res + x
return res
class Model(nn.Module):
"""
Paper link: https://openreview.net/pdf?id=ju_Uqw384Oq
"""
def __init__(self, configs):
super(Model, self).__init__()
self.configs = configs
self.task_name = configs.task_name
self.seq_len = configs.seq_len
self.label_len = configs.label_len
self.pred_len = configs.pred_len
self.model = nn.ModuleList([TimesBlock(configs)
for _ in range(configs.e_layers)])
self.enc_embedding = DataEmbedding(configs.enc_in, configs.d_model, configs.embed, configs.freq,
configs.dropout)
self.layer = configs.e_layers
self.layer_norm = nn.LayerNorm(configs.d_model)
if self.task_name == 'long_term_forecast' or self.task_name == 'short_term_forecast':
self.predict_linear = nn.Linear(
self.seq_len, self.pred_len + self.seq_len)
self.projection = nn.Linear(
configs.d_model, configs.c_out, bias=True)
if self.task_name == 'imputation' or self.task_name == 'anomaly_detection':
self.projection = nn.Linear(
configs.d_model, configs.c_out, bias=True)
if self.task_name == 'classification':
self.act = F.gelu
self.dropout = nn.Dropout(configs.dropout)
self.projection = nn.Linear(
configs.d_model * configs.seq_len, configs.num_class)
def forecast(self, x_enc, x_mark_enc, x_dec, x_mark_dec):
# Normalization from Non-stationary Transformer
means = x_enc.mean(1, keepdim=True).detach()
x_enc = x_enc - means
stdev = torch.sqrt(
torch.var(x_enc, dim=1, keepdim=True, unbiased=False) + 1e-5)
x_enc /= stdev
# embedding
enc_out = self.enc_embedding(x_enc, x_mark_enc) # [B,T,C]
enc_out = self.predict_linear(enc_out.permute(0, 2, 1)).permute(
0, 2, 1) # align temporal dimension
# TimesNet
for i in range(self.layer):
enc_out = self.layer_norm(self.model[i](enc_out))
# project back
dec_out = self.projection(enc_out)
# De-Normalization from Non-stationary Transformer
dec_out = dec_out * \
(stdev[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
dec_out = dec_out + \
(means[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
return dec_out
def imputation(self, x_enc, x_mark_enc, x_dec, x_mark_dec, mask):
# Normalization from Non-stationary Transformer
means = torch.sum(x_enc, dim=1) / torch.sum(mask == 1, dim=1)
means = means.unsqueeze(1).detach()
x_enc = x_enc - means
x_enc = x_enc.masked_fill(mask == 0, 0)
stdev = torch.sqrt(torch.sum(x_enc * x_enc, dim=1) /
torch.sum(mask == 1, dim=1) + 1e-5)
stdev = stdev.unsqueeze(1).detach()
x_enc /= stdev
# embedding
enc_out = self.enc_embedding(x_enc, x_mark_enc) # [B,T,C]
# TimesNet
for i in range(self.layer):
enc_out = self.layer_norm(self.model[i](enc_out))
# project back
dec_out = self.projection(enc_out)
# De-Normalization from Non-stationary Transformer
dec_out = dec_out * \
(stdev[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
dec_out = dec_out + \
(means[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
return dec_out
def anomaly_detection(self, x_enc):
# Normalization from Non-stationary Transformer
means = x_enc.mean(1, keepdim=True).detach()
x_enc = x_enc - means
stdev = torch.sqrt(
torch.var(x_enc, dim=1, keepdim=True, unbiased=False) + 1e-5)
x_enc /= stdev
# embedding
enc_out = self.enc_embedding(x_enc, None) # [B,T,C]
# TimesNet
for i in range(self.layer):
enc_out = self.layer_norm(self.model[i](enc_out))
# project back
dec_out = self.projection(enc_out)
# De-Normalization from Non-stationary Transformer
dec_out = dec_out * \
(stdev[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
dec_out = dec_out + \
(means[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
return dec_out
def classification(self, x_enc, x_mark_enc):
# embedding
enc_out = self.enc_embedding(x_enc, None) # [B,T,C]
# TimesNet
for i in range(self.layer):
enc_out = self.layer_norm(self.model[i](enc_out))
# Output
# the output transformer encoder/decoder embeddings don't include non-linearity
output = self.act(enc_out)
output = self.dropout(output)
# zero-out padding embeddings
output = output * x_mark_enc.unsqueeze(-1)
# (batch_size, seq_length * d_model)
output = output.reshape(output.shape[0], -1)
output = self.projection(output) # (batch_size, num_classes)
return output
def forward(self, x_enc, x_mark_enc, x_dec, x_mark_dec, mask=None):
if self.task_name == 'long_term_forecast' or self.task_name == 'short_term_forecast':
dec_out = self.forecast(x_enc, x_mark_enc, x_dec, x_mark_dec)
return dec_out[:, -self.pred_len:, :] # [B, L, D]
if self.task_name == 'imputation':
dec_out = self.imputation(
x_enc, x_mark_enc, x_dec, x_mark_dec, mask)
return dec_out # [B, L, D]
if self.task_name == 'anomaly_detection':
dec_out = self.anomaly_detection(x_enc)
return dec_out # [B, L, D]
if self.task_name == 'classification':
dec_out = self.classification(x_enc, x_mark_enc)
return dec_out # [B, N]
return None
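# --- Usage sketch: a minimal smoke test with hypothetical hyper-parameters.
if __name__ == '__main__':
    from types import SimpleNamespace
    cfg = SimpleNamespace(task_name='anomaly_detection', seq_len=100, label_len=0,
                          pred_len=0, e_layers=2, enc_in=25, d_model=32, embed='timeF',
                          freq='h', dropout=0.1, top_k=3, d_ff=32, num_kernels=4, c_out=25)
    model = Model(cfg)
    x = torch.randn(2, 100, 25)  # [batch, seq_len, channels]
    print(model(x, None, None, None).shape)  # expected: torch.Size([2, 100, 25])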
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/models/__init__.py
================================================
from .encoder import TSEncoder
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/models/dilated_conv.py
================================================
import torch
from torch import nn
import torch.nn.functional as F
import numpy as np
class SamePadConv(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, dilation=1, groups=1):
super().__init__()
self.receptive_field = (kernel_size - 1) * dilation + 1
padding = self.receptive_field // 2
self.conv = nn.Conv1d(
in_channels, out_channels, kernel_size,
padding=padding,
dilation=dilation,
groups=groups
)
self.remove = 1 if self.receptive_field % 2 == 0 else 0
def forward(self, x):
out = self.conv(x)
if self.remove > 0:
out = out[:, :, : -self.remove]
return out
class ConvBlock(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, dilation, final=False):
super().__init__()
self.conv1 = SamePadConv(in_channels, out_channels, kernel_size, dilation=dilation)
self.conv2 = SamePadConv(out_channels, out_channels, kernel_size, dilation=dilation)
self.projector = nn.Conv1d(in_channels, out_channels, 1) if in_channels != out_channels or final else None
def forward(self, x):
residual = x if self.projector is None else self.projector(x)
x = F.gelu(x)
x = self.conv1(x)
x = F.gelu(x)
x = self.conv2(x)
return x + residual
class DilatedConvEncoder(nn.Module):
def __init__(self, in_channels, channels, kernel_size):
super().__init__()
self.net = nn.Sequential(*[
ConvBlock(
channels[i-1] if i > 0 else in_channels,
channels[i],
kernel_size=kernel_size,
dilation=2**i,
final=(i == len(channels)-1)
)
for i in range(len(channels))
])
def forward(self, x):
return self.net(x)
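# --- Usage sketch: "same" padding keeps the temporal length unchanged while the
# dilation doubles per block, growing the receptive field exponentially with depth.
if __name__ == '__main__':
    enc = DilatedConvEncoder(in_channels=8, channels=[16, 16, 32], kernel_size=3)
    x = torch.randn(4, 8, 128)  # [batch, channels, time]
    print(enc(x).shape)  # expected: torch.Size([4, 32, 128])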
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/models/donut_model.py
================================================
import torch
from torch import nn
from torch.nn import functional as F
class VariationalNet(nn.Module):
'''
Encodes the input by passing it through the encoder network and returns the latent representations.
'''
def __init__(self, in_channel, latent_dim=100, hidden_dim=3):
super(VariationalNet, self).__init__()
self.latent_dim = latent_dim
self.hidden_dim = hidden_dim
self.encoder = nn.Sequential(
nn.Linear(in_channel, latent_dim),
nn.ReLU(),
nn.Linear(latent_dim, latent_dim),
nn.ReLU()
)
self.fc_mu = nn.Linear(latent_dim, hidden_dim)
self.fc_var = nn.Sequential(
nn.Linear(latent_dim, hidden_dim),
nn.Softplus()
)
def forward(self, inputs):
'''
Args:
inputs: [batch_size, max_length, in_channel]
Returns:
z_mu: [batch_size, max_length, hidden_dim]
z_log_var: [batch_size, max_length, hidden_dim]
'''
hidden_res = self.encoder(inputs) # [batch_size, max_length, latent_dim]
z_mu = self.fc_mu(hidden_res) # [batch_size, max_length, hidden_dim]
z_log_var = self.fc_var(hidden_res) + 1e-4 # [batch_size, max_length, hidden_dim]
return z_mu, z_log_var
class GenerativeNet(nn.Module):
'''
Maps the given latent representations through the decoder network onto the input space.
'''
def __init__(self, in_channel, latent_dim=100, hidden_dim=3):
super(GenerativeNet, self).__init__()
self.latent_dim = latent_dim
self.hidden_dim = hidden_dim
self.decoder = nn.Sequential(
nn.Linear(hidden_dim, latent_dim),
nn.ReLU(),
nn.Linear(latent_dim, latent_dim),
nn.ReLU()
)
self.fc_mu = nn.Linear(latent_dim, in_channel)
self.fc_var = nn.Sequential(
nn.Linear(latent_dim, in_channel),
nn.Softplus()
)
def forward(self, z):
'''
Args:
z: [batch_size, max_length, hidden_dim]
Returns:
x_mu: [batch_size, max_length, in_channel]
x_log_var: [batch_size, max_length, in_channel]
'''
hidden_res = self.decoder(z) # [batch_size, max_length, latent_dim]
x_mu = self.fc_mu(hidden_res) # [batch_size, max_length, in_channel]
x_log_var = self.fc_var(hidden_res) + 1e-4 # [batch_size, max_length, in_channel]
return x_mu, x_log_var
class DONUT_Model(nn.Module):
def __init__(self, in_channel, latent_dim=100, hidden_dim=3):
super(DONUT_Model, self).__init__()
self.in_channel = in_channel
self.latent_dim = latent_dim
self.hidden_dim = hidden_dim
self.Encoder = VariationalNet(self.in_channel, self.latent_dim, self.hidden_dim)
self.Decoder = GenerativeNet(self.in_channel, self.latent_dim, self.hidden_dim)
def reparameterize(self, mu, logvar):
"""
Reparameterization trick to sample from N(mu, var) from
N(0,1).
:param mu: (Tensor) Mean of the latent Gaussian [batch_size, max_length, hidden_dim]
:param logvar: (Tensor) Standard deviation of the latent Gaussian [batch_size, max_length, hidden_dim]
:return: (Tensor) [batch_size, max_length, hidden_dim]
"""
std = torch.exp(0.5 * logvar)
eps = torch.randn_like(std)
return eps * std + mu
def forward(self, inputs):
'''
Args:
inputs: [batch_size, max_length, in_channel]
Returns:
outputs: [batch_size, max_length, in_channel]
z_mu, z_log_var: [batch_size, max_length, hidden_dim]
x_mu, x_log_var: [batch_size, max_length, in_channel]
'''
z_mu, z_log_var = self.Encoder(inputs)
z = self.reparameterize(z_mu, z_log_var) # [batch_size, max_length, hidden_dim]
x_mu, x_log_var = self.Decoder(z)
outputs = self.reparameterize(x_mu, x_log_var) # [batch_size, max_length, in_channel]
return outputs, z_mu, z_log_var, x_mu, x_log_var
def loss_function(self, inputs, outputs, z_mu, z_log_var, x_mu, x_log_var, z_kld_weight, x_kld_weight):
"""
Computes the VAE loss function.
KL(N(\mu, \sigma), N(0, 1)) = \log \frac{1}{\sigma} + \frac{\sigma^2 + \mu^2}{2} - \frac{1}{2}
Args:
inputs, outputs: [batch_size, max_length, in_channel]
z_mu, z_log_var: [batch_size, max_length, hidden_dim]
x_mu, x_log_var: [batch_size, max_length, in_channel]
z_kld_weight, x_kld_weight: float Value
"""
recons_loss = F.mse_loss(outputs, inputs)
_, _, hidden_dim = z_mu.size()
z_mu = z_mu.reshape(-1, hidden_dim)
z_log_var = z_log_var.reshape(-1, hidden_dim)
z_kld_loss = torch.mean(-0.5 * torch.sum(1 + z_log_var - z_mu ** 2 - z_log_var.exp(), dim = 1), dim = 0)
_, _, in_channel = x_mu.size()
x_mu = x_mu.reshape(-1, in_channel)
x_log_var = x_log_var.reshape(-1, in_channel)
x_kld_loss = torch.mean(-0.5 * torch.sum(1 + x_log_var - x_mu ** 2 - x_log_var.exp(), dim = 1), dim = 0)
loss = recons_loss + z_kld_weight * z_kld_loss + x_kld_weight * x_kld_loss
return loss
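# --- Usage sketch: a minimal smoke test with hypothetical window width and
# KLD weights; shapes follow the docstrings above.
if __name__ == '__main__':
    model = DONUT_Model(in_channel=120, latent_dim=100, hidden_dim=3)
    x = torch.randn(16, 1, 120)  # [batch_size, max_length, in_channel]
    outputs, z_mu, z_log_var, x_mu, x_log_var = model(x)
    loss = model.loss_function(x, outputs, z_mu, z_log_var, x_mu, x_log_var,
                               z_kld_weight=0.005, x_kld_weight=0.005)
    print(outputs.shape, loss.item())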
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/models/encoder.py
================================================
import torch
from torch import nn
import torch.nn.functional as F
import numpy as np
from .dilated_conv import DilatedConvEncoder
def generate_continuous_mask(B, T, n=5, l=0.1):
res = torch.full((B, T), True, dtype=torch.bool)
if isinstance(n, float):
n = int(n * T)
n = max(min(n, T // 2), 1)
if isinstance(l, float):
l = int(l * T)
l = max(l, 1)
for i in range(B):
for _ in range(n):
t = np.random.randint(T-l+1)
res[i, t:t+l] = False
return res
def generate_binomial_mask(B, T, p=0.5):
return torch.from_numpy(np.random.binomial(1, p, size=(B, T))).to(torch.bool)
class TSEncoder(nn.Module):
def __init__(self, input_dims, output_dims, hidden_dims=64, depth=10, mask_mode='binomial'):
super().__init__()
self.input_dims = input_dims
self.output_dims = output_dims
self.hidden_dims = hidden_dims
self.mask_mode = mask_mode
self.input_fc = nn.Linear(input_dims, hidden_dims)
self.feature_extractor = DilatedConvEncoder(
hidden_dims,
[hidden_dims] * depth + [output_dims],
kernel_size=3
)
self.repr_dropout = nn.Dropout(p=0.1)
def forward(self, x, mask=None): # x: B x T x input_dims
nan_mask = ~x.isnan().any(axis=-1)
x[~nan_mask] = 0
x = self.input_fc(x) # B x T x Ch
# generate & apply mask
if mask is None:
if self.training:
mask = self.mask_mode
else:
mask = 'all_true'
if mask == 'binomial':
mask = generate_binomial_mask(x.size(0), x.size(1)).to(x.device)
elif mask == 'continuous':
mask = generate_continuous_mask(x.size(0), x.size(1)).to(x.device)
elif mask == 'all_true':
mask = x.new_full((x.size(0), x.size(1)), True, dtype=torch.bool)
elif mask == 'all_false':
mask = x.new_full((x.size(0), x.size(1)), False, dtype=torch.bool)
elif mask == 'mask_last':
mask = x.new_full((x.size(0), x.size(1)), True, dtype=torch.bool)
mask[:, -1] = False
mask &= nan_mask
x[~mask] = 0
# conv encoder
x = x.transpose(1, 2) # B x Ch x T
x = self.repr_dropout(self.feature_extractor(x)) # B x Co x T
x = x.transpose(1, 2) # B x T x Co
return x
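# --- Usage sketch: in training mode a binomial mask is sampled and applied;
# call .eval() to disable masking. Hyper-parameters below are hypothetical.
if __name__ == '__main__':
    enc = TSEncoder(input_dims=1, output_dims=320, hidden_dims=64, depth=10)
    x = torch.randn(8, 200, 1)  # B x T x input_dims
    print(enc(x).shape)  # expected: torch.Size([8, 200, 320])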
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/models/losses.py
================================================
import torch
from torch import nn
import torch.nn.functional as F
def hierarchical_contrastive_loss(z1, z2, alpha=0.5, temporal_unit=0):
loss = torch.tensor(0., device=z1.device)
d = 0
while z1.size(1) > 1:
if alpha != 0:
loss += alpha * instance_contrastive_loss(z1, z2)
if d >= temporal_unit:
if 1 - alpha != 0:
loss += (1 - alpha) * temporal_contrastive_loss(z1, z2)
d += 1
z1 = F.max_pool1d(z1.transpose(1, 2), kernel_size=2).transpose(1, 2)
z2 = F.max_pool1d(z2.transpose(1, 2), kernel_size=2).transpose(1, 2)
if z1.size(1) == 1:
if alpha != 0:
loss += alpha * instance_contrastive_loss(z1, z2)
d += 1
return loss / d
def instance_contrastive_loss(z1, z2):
B, T = z1.size(0), z1.size(1)
if B == 1:
return z1.new_tensor(0.)
z = torch.cat([z1, z2], dim=0) # 2B x T x C
z = z.transpose(0, 1) # T x 2B x C
sim = torch.matmul(z, z.transpose(1, 2)) # T x 2B x 2B
logits = torch.tril(sim, diagonal=-1)[:, :, :-1] # T x 2B x (2B-1)
logits += torch.triu(sim, diagonal=1)[:, :, 1:]
logits = -F.log_softmax(logits, dim=-1)
i = torch.arange(B, device=z1.device)
loss = (logits[:, i, B + i - 1].mean() + logits[:, B + i, i].mean()) / 2
return loss
def temporal_contrastive_loss(z1, z2):
B, T = z1.size(0), z1.size(1)
if T == 1:
return z1.new_tensor(0.)
z = torch.cat([z1, z2], dim=1) # B x 2T x C
sim = torch.matmul(z, z.transpose(1, 2)) # B x 2T x 2T
logits = torch.tril(sim, diagonal=-1)[:, :, :-1] # B x 2T x (2T-1)
logits += torch.triu(sim, diagonal=1)[:, :, 1:]
logits = -F.log_softmax(logits, dim=-1)
t = torch.arange(T, device=z1.device)
loss = (logits[:, t, T + t - 1].mean() + logits[:, T + t, t].mean()) / 2
return loss
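# --- Usage sketch: the hierarchy is built by max-pooling the time axis, so any
# temporal length works; instance_contrastive_loss needs a batch size > 1.
if __name__ == '__main__':
    z1, z2 = torch.randn(4, 64, 320), torch.randn(4, 64, 320)  # two augmented views
    print(hierarchical_contrastive_loss(z1, z2).item())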
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/models/lstm_vae_model.py
================================================
import torch
from torch import nn
from torch.nn import functional as F
class LSTM_Encoder(nn.Module):
'''
Encodes the input by passing it through the encoder network and returns the latent representations.
'''
def __init__(self, device, in_channel, hidden_size=16, hidden_dim=3):
super(LSTM_Encoder, self).__init__()
self.device = device
self.hidden_size = hidden_size
self.hidden_dim = hidden_dim
self.encoder = nn.LSTM(input_size=in_channel, hidden_size=hidden_size, batch_first=True, bidirectional=True)
self.fc_mu = nn.Linear(2*hidden_size, hidden_dim)
self.fc_var = nn.Sequential(
nn.Linear(2*hidden_size, hidden_dim),
nn.Softplus()
)
def forward(self, inputs):
'''
Args:
inputs: [batch_size, max_length, in_channel]
Returns:
z_mu: [batch_size, max_length, hidden_dim]
z_log_var: [batch_size, max_length, hidden_dim]
'''
batch_size, _, _ = inputs.size()
h_0 = torch.zeros((2, batch_size, self.hidden_size), requires_grad=True).to(self.device)
c_0 = torch.zeros((2, batch_size, self.hidden_size), requires_grad=True).to(self.device)
# hidden_res: [batch_size, max_length, 2*hidden_size]
hidden_res, (h_n, c_n) = self.encoder(inputs, (h_0, c_0))
z_mu = self.fc_mu(hidden_res) # [batch_size, max_length, hidden_dim]
z_log_var = self.fc_var(hidden_res) + 1e-4 # [batch_size, max_length, hidden_dim]
return z_mu, z_log_var
class LSTM_Decoder(nn.Module):
'''
Maps the given latent representations through the decoder network onto the input space.
'''
def __init__(self, device, in_channel, hidden_size=16, hidden_dim=3):
super(LSTM_Decoder, self).__init__()
self.device = device
self.hidden_size = hidden_size
self.hidden_dim = hidden_dim
self.decoder = nn.LSTM(input_size=hidden_dim, hidden_size=hidden_size, batch_first=True, bidirectional=True)
self.fc_mu = nn.Linear(2*hidden_size, in_channel)
self.fc_var = nn.Sequential(
nn.Linear(2*hidden_size, in_channel),
nn.Softplus()
)
def forward(self, z):
'''
Args:
z: [batch_size, max_length, hidden_dim]
Returns:
x_mu: [batch_size, max_length, in_channel]
x_log_var: [batch_size, max_length, in_channel]
'''
batch_size, _, _ = z.size()
h_0 = torch.zeros((2, batch_size, self.hidden_size), requires_grad=True).to(self.device)
c_0 = torch.zeros((2, batch_size, self.hidden_size), requires_grad=True).to(self.device)
# hidden_res: [batch_size, max_length, 2*hidden_size]
hidden_res, (h_n, c_n) = self.decoder(z, (h_0, c_0))
x_mu = self.fc_mu(hidden_res) # [batch_size, max_length, in_channel]
x_log_var = self.fc_var(hidden_res) + 1e-4 # [batch_size, max_length, in_channel]
return x_mu, x_log_var
class LSTM_VAE_Model(nn.Module):
def __init__(self, device, in_channel, hidden_size=16, hidden_dim=3):
super(LSTM_VAE_Model, self).__init__()
self.device = device
self.in_channel = in_channel
self.hidden_size = hidden_size
self.hidden_dim = hidden_dim
self.Encoder = LSTM_Encoder(self.device, self.in_channel, self.hidden_size, self.hidden_dim)
self.Decoder = LSTM_Decoder(self.device, self.in_channel, self.hidden_size, self.hidden_dim)
def reparameterize(self, mu, logvar):
"""
Reparameterization trick to sample from N(mu, var) from
N(0,1).
:param mu: (Tensor) Mean of the latent Gaussian [batch_size, max_length, hidden_dim]
:param logvar: (Tensor) Standard deviation of the latent Gaussian [batch_size, max_length, hidden_dim]
:return: (Tensor) [batch_size, max_length, hidden_dim]
"""
std = torch.exp(0.5 * logvar)
eps = torch.randn_like(std)
return eps * std + mu
def forward(self, inputs):
'''
Args:
inputs: [batch_size, max_length, in_channel]
Returns:
outputs: [batch_size, max_length, in_channel]
z_mu, z_log_var: [batch_size, max_length, hidden_dim]
x_mu, x_log_var: [batch_size, max_length, in_channel]
'''
z_mu, z_log_var = self.Encoder(inputs)
z = self.reparameterize(z_mu, z_log_var) # [batch_size, max_length, hidden_dim]
x_mu, x_log_var = self.Decoder(z)
outputs = self.reparameterize(x_mu, x_log_var) # [batch_size, max_length, in_channel]
return outputs, z_mu, z_log_var, x_mu, x_log_var
def loss_function(self, inputs, outputs, z_mu, z_log_var, x_mu, x_log_var, z_kld_weight, x_kld_weight):
"""
Computes the VAE loss function.
KL(N(\mu, \sigma), N(0, 1)) = \log \frac{1}{\sigma} + \frac{\sigma^2 + \mu^2}{2} - \frac{1}{2}
Args:
inputs, outputs: [batch_size, max_length, in_channel]
z_mu, z_log_var: [batch_size, max_length, hidden_dim]
x_mu, x_log_var: [batch_size, max_length, in_channel]
z_kld_weight, x_kld_weight: float Value
"""
recons_loss = F.mse_loss(outputs, inputs)
_, _, hidden_dim = z_mu.size()
z_mu = z_mu.reshape(-1, hidden_dim)
z_log_var = z_log_var.reshape(-1, hidden_dim)
z_kld_loss = torch.mean(-0.5 * torch.sum(1 + z_log_var - z_mu ** 2 - z_log_var.exp(), dim = 1), dim = 0)
_, _, in_channel = x_mu.size()
x_mu = x_mu.reshape(-1, in_channel)
x_log_var = x_log_var.reshape(-1, in_channel)
x_kld_loss = torch.mean(-0.5 * torch.sum(1 + x_log_var - x_mu ** 2 - x_log_var.exp(), dim = 1), dim = 0)
loss = recons_loss + z_kld_weight * z_kld_loss + x_kld_weight * x_kld_loss
return loss
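# --- Usage sketch: a minimal smoke test with hypothetical hyper-parameters.
if __name__ == '__main__':
    device = torch.device('cpu')
    model = LSTM_VAE_Model(device, in_channel=1, hidden_size=16, hidden_dim=3)
    x = torch.randn(8, 120, 1)  # [batch_size, max_length, in_channel]
    outputs, z_mu, z_log_var, x_mu, x_log_var = model(x)
    loss = model.loss_function(x, outputs, z_mu, z_log_var, x_mu, x_log_var, 0.005, 0.005)
    print(outputs.shape, loss.item())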
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/new_dataset_read_test.py
================================================
from datasets.data_loader import get_loader_segment
index = 143
datapath = './datasets/'
dataset_name = 'MSL' ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR
data_path = datapath + dataset_name + '/'
batch_size = 128
train_loader = get_loader_segment(index, data_path, batch_size, win_size=100, step=100, mode='train', dataset=dataset_name)
val_loader = get_loader_segment(index, data_path, batch_size, win_size=100, step=100, mode='val', dataset=dataset_name)
test_loader = get_loader_segment(index, data_path, batch_size, win_size=100, step=100, mode='test', dataset=dataset_name)
print("Read Success!!!")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/scripts/at_zeta0.sh
================================================
python train_lstm_vae_multi.py --dataset PSM --save_csv_name train_lstm_vae_multi_0717.csv --gpu 0;
python train_donut_multi.py --dataset PSM --save_csv_name train_donut_multi_0717.csv --gpu 0;
python train_lstm_vae_multi.py --dataset SWAT --save_csv_name train_lstm_vae_multi_0717.csv --gpu 0;
python train_donut_multi.py --dataset SWAT --save_csv_name train_donut_multi_0717.csv --gpu 0;
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/scripts/at_zeta1.sh
================================================
python train_at_multi.py --anormly_ratio 0.5 --dataset SMD --save_csv_name train_at_multi_0719.csv --cuda cuda:0;
python train_at_multi.py --anormly_ratio 1 --dataset MSL --save_csv_name train_at_multi_0719.csv --cuda cuda:0;
python train_at_multi.py --anormly_ratio 0.85 --dataset SMAP --save_csv_name train_at_multi_0719.csv --cuda cuda:0;
python train_at_multi.py --anormly_ratio 1 --dataset PSM --save_csv_name train_at_multi_0719.csv --cuda cuda:0;
python train_at_multi.py --anormly_ratio 1 --dataset SWAT --save_csv_name train_at_multi_0719.csv --cuda cuda:0;
python train_at_multi.py --anormly_ratio 0.9 --dataset NIPS_TS_Swan --save_csv_name train_at_multi_0719.csv --cuda cuda:0;
python train_at_multi.py --anormly_ratio 1 --dataset NIPS_TS_Water --save_csv_name train_at_multi_0719.csv --cuda cuda:0;
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/scripts/generator_sh.py
================================================
uni_datasets = ['kpi', 'yahoo']
multi_datasets = ['SMD', 'MSL', 'SMAP', 'PSM', 'SWAT', 'NIPS_TS_Swan', 'NIPS_TS_Water'] ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water , 'UCR'
# code_main = 'main_gpt4ts_uea' ## main_patchtst_ucr main_gpt4ts_ucr mian_patchtst
code_main_list = ['train_spot', 'train_dspot', 'train_lstm_vae', 'train_donut', 'train_ts2vec']
# for dataset in uni_datasets:
# i = 1
# for code_main in code_main_list:
# print("i = ", i, "dataset_name = ", dataset)
# i = i + 1
#
# save_csv_name = code_main + '_0717.csv' ## --len_k
#
# with open('/dev_data/lz/tsm_ptms_anomaly_detection/other_anomaly_baselines/scripts/uni_at.sh', 'a') as f:
# f.write('python ' + code_main + '.py '
# '--dataset ' + dataset
# +
# ' --save_csv_name ' + save_csv_name + ' --gpu 0' + ';\n')
# for _index in range(1,251):
# i = 1
# for code_main in code_main_list:
# print("i = ", i, "dataset_name = UCR")
# i = i + 1
#
# save_csv_name = code_main + '_ucr_0715.csv' ## --len_k
#
# with open('/dev_data/lz/tsm_ptms_anomaly_detection/other_anomaly_baselines/scripts/ucr_at.sh', 'a') as f:
# f.write('python ' + code_main + '_multi.py '
# '--dataset UCR --index ' + str(_index)
# +
# ' --save_csv_name ' + save_csv_name + ' --gpu 0' + ';\n')
# code_main_list = ['train_lstm_vae_multi', 'train_donut_multi', 'train_ts2vec_multi', 'train_dcdetector']
# for dataset in multi_datasets:
# i = 1
# for code_main in code_main_list:
# print("i = ", i, "dataset_name = ", dataset)
# i = i + 1
#
# save_csv_name = code_main + '_0717.csv' ## --len_k
#
# with open('/dev_data/lz/tsm_ptms_anomaly_detection/other_anomaly_baselines/scripts/multi_at.sh', 'a') as f:
# f.write('python ' + code_main + '.py '
# '--dataset ' + dataset
# +
# ' --save_csv_name ' + save_csv_name + ' --gpu 0' + ';\n')
# code_main_list = ['train_timesnet', 'train_gpt4ts']
# for dataset in multi_datasets:
# i = 1
# for code_main in code_main_list:
# print("i = ", i, "dataset_name = ", dataset)
# i = i + 1
#
# save_csv_name = code_main + '_0717.csv' ## --len_k
#
# with open('/dev_data/lz/tsm_ptms_anomaly_detection/other_anomaly_baselines/scripts/multi_at.sh', 'a') as f:
# f.write('python ' + code_main + '.py '
# '--data ' + dataset
# +
# ' --save_csv_name ' + save_csv_name + ' --gpu 0' + ';\n')
#
# code_main_list = ['train_at_multi'] ## , 'train_gpt4ts' train_timesnet train_dcdetector train_at_multi
#
# for _index in range(1,251):
# i = 1
# for code_main in code_main_list:
# print("i = ", i, "dataset_name = UCR")
# i = i + 1
#
# save_csv_name = code_main + '_ucr_0719.csv' ## --len_k
#
# with open('/dev_data/lz/tsm_ptms_anomaly_detection/other_anomaly_baselines/scripts/ucr_at_zeta0.sh', 'a') as f:
# f.write('python ' + code_main + '.py '
# '--anormly_ratio 0.5 --dataset UCR --index ' + str(_index)
# +
# ' --save_csv_name ' + save_csv_name + ' --cuda cuda:0' + ';\n') ## anomaly_ratio anormly_ratio anormly_ratio
# code_main_list = ['train_dcdetector_nui'] ## , 'train_gpt4ts' train_timesnet train_dcdetector train_at_multi
# ## train_gpt4ts_uni train_timesnet_uni
# for dataset in uni_datasets:
# i = 1
# for code_main in code_main_list:
# print("i = ", i, "dataset_name = UCR")
# i = i + 1
#
# save_csv_name = code_main + '_hm_0720.csv' ## --len_k
#
# with open('/SSD/lz/tsm_ptms_anomaly_detection/other_anomaly_baselines/scripts/ucr_at.sh', 'a') as f:
# f.write('python ' + code_main + '.py '
# '--anormly_ratio 1 --dataset ' + dataset
# +
# ' --save_csv_name ' + save_csv_name + ' --gpu 0' + ';\n') ## anomaly_ratio anormly_ratio anormly_ratio
code_main_list = ['train_gpt4ts'] ## , 'train_gpt4ts' train_timesnet train_dcdetector train_at_multi
## train_gpt4ts_uni train_timesnet_uni
uni_datasets = [79, 108, 187, 203]
for dataset in uni_datasets:
i = 1
for code_main in code_main_list:
print("i = ", i, "dataset_name = UCR")
i = i + 1
# save_csv_name = code_main + '_hm_0720.csv' ## --len_k
with open('/dev_data/lz/tsm_ptms_anomaly_detection/other_anomaly_baselines/scripts/ucr_at.sh', 'a') as f:
f.write('python ' + code_main + '.py '
'--index ' + str(dataset)
+ ';\n') ## anomaly_ratio anormly_ratio anormly_ratio
### --cuda cuda:0
## nohup ./scripts/uni_at.sh &
## nohup ./scripts/multi_at.sh &
## nohup ./scripts/ucr_at.sh &
## nohup ./scripts/ucr_at_delta_0.sh &
## nohup ./scripts/ucr_at_delta_1.sh &
## nohup ./scripts/ucr_at_delta_1_2.sh &
## nohup ./scripts/ucr_at_zeta0.sh &
## nohup ./scripts/at_zeta1.sh &
## nohup ./scripts/at_zeta0.sh &
## nohup ./scripts/kpi.sh &
## nohup ./scripts/yahoo.sh &
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/scripts/kpi.sh
================================================
python train_at_uni.py --dataset kpi --batch_size 8 --save_csv_name train_at_uni_0720_.csv --cuda cuda:0;
python train_at_uni.py --dataset yahoo --batch_size 8 --save_csv_name train_at_uni_0720_.csv --cuda cuda:0;
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/scripts/multi_at.sh
================================================
python train_lstm_vae_multi.py --dataset SMD --save_csv_name train_lstm_vae_multi_0717.csv --gpu 1;
python train_donut_multi.py --dataset SMD --save_csv_name train_donut_multi_0717.csv --gpu 1;
python train_ts2vec_multi.py --dataset SMD --save_csv_name train_ts2vec_multi_0717.csv --gpu 1;
python train_dcdetector.py --dataset SMD --save_csv_name train_dcdetector_0717.csv --gpu 1;
python train_lstm_vae_multi.py --dataset MSL --save_csv_name train_lstm_vae_multi_0717.csv --gpu 1;
python train_donut_multi.py --dataset MSL --save_csv_name train_donut_multi_0717.csv --gpu 1;
python train_ts2vec_multi.py --dataset MSL --save_csv_name train_ts2vec_multi_0717.csv --gpu 1;
python train_dcdetector.py --dataset MSL --save_csv_name train_dcdetector_0717.csv --gpu 1;
python train_lstm_vae_multi.py --dataset SMAP --save_csv_name train_lstm_vae_multi_0717.csv --gpu 1;
python train_donut_multi.py --dataset SMAP --save_csv_name train_donut_multi_0717.csv --gpu 1;
python train_ts2vec_multi.py --dataset SMAP --save_csv_name train_ts2vec_multi_0717.csv --gpu 1;
python train_dcdetector.py --dataset SMAP --save_csv_name train_dcdetector_0717.csv --gpu 1;
python train_ts2vec_multi.py --dataset PSM --save_csv_name train_ts2vec_multi_0717.csv --gpu 1;
python train_dcdetector.py --dataset PSM --save_csv_name train_dcdetector_0717.csv --gpu 1;
python train_lstm_vae_multi.py --dataset SWAT --save_csv_name train_lstm_vae_multi_0717.csv --gpu 1;
python train_donut_multi.py --dataset SWAT --save_csv_name train_donut_multi_0717.csv --gpu 1;
python train_ts2vec_multi.py --dataset SWAT --save_csv_name train_ts2vec_multi_0717.csv --gpu 1;
python train_dcdetector.py --dataset SWAT --save_csv_name train_dcdetector_0717.csv --gpu 1;
python train_lstm_vae_multi.py --dataset NIPS_TS_Swan --save_csv_name train_lstm_vae_multi_0717.csv --gpu 1;
python train_donut_multi.py --dataset NIPS_TS_Swan --save_csv_name train_donut_multi_0717.csv --gpu 1;
python train_ts2vec_multi.py --dataset NIPS_TS_Swan --save_csv_name train_ts2vec_multi_0717.csv --gpu 1;
python train_dcdetector.py --dataset NIPS_TS_Swan --save_csv_name train_dcdetector_0717.csv --gpu 1;
python train_lstm_vae_multi.py --dataset NIPS_TS_Water --save_csv_name train_lstm_vae_multi_0717.csv --gpu 1;
python train_donut_multi.py --dataset NIPS_TS_Water --save_csv_name train_donut_multi_0717.csv --gpu 1;
python train_ts2vec_multi.py --dataset NIPS_TS_Water --save_csv_name train_ts2vec_multi_0717.csv --gpu 1;
python train_dcdetector.py --dataset NIPS_TS_Water --save_csv_name train_dcdetector_0717.csv --gpu 1;
python train_timesnet.py --data SMD --save_csv_name train_timesnet_0717.csv --gpu 1;
python train_gpt4ts.py --data SMD --save_csv_name train_gpt4ts_0717.csv --gpu 1;
python train_timesnet.py --data MSL --save_csv_name train_timesnet_0717.csv --gpu 1;
python train_gpt4ts.py --data MSL --save_csv_name train_gpt4ts_0717.csv --gpu 1;
python train_timesnet.py --data SMAP --save_csv_name train_timesnet_0717.csv --gpu 1;
python train_gpt4ts.py --data SMAP --save_csv_name train_gpt4ts_0717.csv --gpu 1;
python train_timesnet.py --data PSM --save_csv_name train_timesnet_0717.csv --gpu 1;
python train_gpt4ts.py --data PSM --save_csv_name train_gpt4ts_0717.csv --gpu 1;
python train_timesnet.py --data SWAT --save_csv_name train_timesnet_0717.csv --gpu 1;
python train_gpt4ts.py --data SWAT --save_csv_name train_gpt4ts_0717.csv --gpu 1;
python train_timesnet.py --data NIPS_TS_Swan --save_csv_name train_timesnet_0717.csv --gpu 1;
python train_gpt4ts.py --data NIPS_TS_Swan --save_csv_name train_gpt4ts_0717.csv --gpu 1;
python train_timesnet.py --data NIPS_TS_Water --save_csv_name train_timesnet_0717.csv --gpu 1;
python train_gpt4ts.py --data NIPS_TS_Water --save_csv_name train_gpt4ts_0717.csv --gpu 1;
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/scripts/ucr_at.sh
================================================
python train_dcdetector.py --index 38;
python train_dcdetector.py --index 54;
python train_dcdetector.py --index 71;
python train_dcdetector.py --index 72;
python train_dcdetector.py --index 79;
python train_dcdetector.py --index 85;
python train_dcdetector.py --index 88;
python train_dcdetector.py --index 108;
python train_dcdetector.py --index 146;
python train_dcdetector.py --index 162;
python train_dcdetector.py --index 179;
python train_dcdetector.py --index 180;
python train_dcdetector.py --index 187;
python train_dcdetector.py --index 193;
python train_dcdetector.py --index 196;
python train_dcdetector.py --index 203;
python train_dcdetector.py --index 212;
python train_dcdetector.py --index 229;
python train_dcdetector.py --index 232;
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/scripts/ucr_at_delta_0.sh
================================================
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 35 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 36 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 37 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 38 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 39 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 40 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 41 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 42 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 43 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 44 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 45 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 46 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 47 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 48 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 49 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 50 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 51 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 52 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 53 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 54 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 55 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 56 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 57 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 58 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 59 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 60 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 61 --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
for index in $(seq 62 250); do
    python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index ${index} --save_csv_name train_timesnet_ucr_0717.csv --gpu 0;
done
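# GPT4TS baseline: the same sweep over UCR subsets 1-250, pinned to GPU 1;
# each run saves its results to train_gpt4ts_ucr_0717.csv via --save_csv_name.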
for index in $(seq 1 250); do
    python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index ${index} --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
done
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/scripts/ucr_at_delta_1.sh
================================================
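# GPT4TS sweep over the UCR anomaly subsets on GPU 1, one run per --index.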
for index in $(seq 1 103); do
    python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index ${index} --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
done
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 104 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 105 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 106 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 107 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 108 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 109 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 110 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 111 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 112 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 113 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 114 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 115 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 116 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 117 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 118 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 119 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 120 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 121 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 122 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 123 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 124 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 125 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 126 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 127 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 128 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 129 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 130 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 131 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 132 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 133 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 134 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 135 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 136 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 137 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 138 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 139 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 140 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 141 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 142 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 143 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 144 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 145 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 146 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 147 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 148 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 149 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 150 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 151 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 152 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 153 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 154 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 155 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 156 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 157 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 158 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 159 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 160 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 161 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 162 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 163 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 164 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 165 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 166 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 167 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 168 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 169 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 170 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 171 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 172 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 173 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 174 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 175 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 176 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 177 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 178 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 179 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 180 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 181 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 182 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 183 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 184 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 185 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 186 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 187 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 188 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 189 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 190 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 191 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 192 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 193 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 194 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 195 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 196 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 197 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 198 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 199 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 200 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 201 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 202 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 203 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 204 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 205 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 206 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 207 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 208 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 209 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 210 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 211 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 212 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 213 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 214 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 215 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 216 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 217 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 218 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 219 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 220 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 221 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 222 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 223 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 224 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 225 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 226 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 227 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 228 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 229 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 230 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 231 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 232 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 233 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 234 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 235 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 236 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 237 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 238 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 239 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 240 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 241 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 242 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 243 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 244 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 245 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 246 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 247 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 248 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 249 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index 250 --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1;
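# NOTE: the one-command-per-index lines above are equivalent to a loop of the
# form below, assuming the same CLI (cf. scripts/generator_sh.py, which
# presumably generates these command lists):
#   for i in $(seq 42 250); do
#     python train_gpt4ts.py --anomaly_ratio 0.5 --data UCR --index "$i" \
#       --save_csv_name train_gpt4ts_ucr_0717.csv --gpu 1
#   done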
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/scripts/ucr_at_delta_1_2.sh
================================================
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 35 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 36 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 37 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 38 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 39 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 40 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 41 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 42 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 43 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 44 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 45 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 46 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 47 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 48 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 49 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 50 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 51 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 52 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 53 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 54 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 55 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 56 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 57 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 58 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 59 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 60 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 61 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 62 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 63 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 64 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 65 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 66 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 67 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 68 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 69 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 70 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 71 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 72 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 73 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 74 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 75 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 76 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 77 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 78 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 79 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 80 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
python train_timesnet.py --anomaly_ratio 0.5 --data UCR --index 81 --save_csv_name train_timesnet_ucr_0719.csv --gpu 1;
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/scripts/ucr_at_zeta0.sh
================================================
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 214 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 215 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 216 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 217 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 218 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 219 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 220 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 221 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 222 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 223 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 224 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 225 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 226 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 227 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 228 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 229 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 230 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 231 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 232 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 233 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 234 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 235 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 236 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 237 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 238 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 239 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 240 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 241 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 242 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 243 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 244 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 245 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 246 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 247 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 248 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 249 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
python train_at_multi.py --anormly_ratio 0.5 --dataset UCR --index 250 --save_csv_name train_at_multi_ucr_0719.csv --cuda cuda:1;
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/scripts/uni_at.sh
================================================
python train_gpt4ts_uni.py --anomaly_ratio 1 --data kpi --save_csv_name train_gpt4ts_uni_hm_0720.csv --gpu 1;
python train_gpt4ts_uni.py --anomaly_ratio 1 --data yahoo --save_csv_name train_gpt4ts_uni_hm_0720.csv --gpu 0;
python train_timesnet_uni.py --anomaly_ratio 1 --data kpi --save_csv_name train_timesnet_uni_hm_0720.csv --gpu 0;
python train_timesnet_uni.py --anomaly_ratio 1 --data yahoo --save_csv_name train_timesnet_uni_hm_0720.csv --gpu 0;
python train_dcdetector_nui.py --anormly_ratio 1 --dataset kpi --save_csv_name train_dcdetector_nui_hm_0720.csv --gpu 0;
python train_dcdetector_nui.py --anormly_ratio 1 --dataset yahoo --save_csv_name train_dcdetector_nui_hm_0720.csv --gpu 1;
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/scripts/yahoo.sh
================================================
python train_dcdetector_nui.py --anormly_ratio 1 --dataset kpi --save_csv_name train_dcdetector_nui_hm_0720.csv --gpu 0;
python train_dcdetector_nui.py --anormly_ratio 1 --dataset yahoo --save_csv_name train_dcdetector_nui_hm_0720.csv --gpu 0;
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/spot.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Dec 12 10:08:16 2016
@author: Alban Siffer
@company: Amossys
@license: GNU GPLv3
"""
from scipy.optimize import minimize
from math import log,floor
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tqdm
# colors for plot
deep_saffron = '#FF9933'
air_force_blue = '#5D8AA8'
"""
================================= MAIN CLASS ==================================
"""
class SPOT:
"""
    This class allows one to run the SPOT algorithm on a univariate dataset (upper-bound detection only)
Attributes
----------
proba : float
Detection level (risk), chosen by the user
extreme_quantile : float
current threshold (bound between normal and abnormal events)
data : numpy.array
stream
init_data : numpy.array
initial batch of observations (for the calibration/initialization step)
init_threshold : float
initial threshold computed during the calibration step
peaks : numpy.array
array of peaks (excesses above the initial threshold)
n : int
number of observed values
Nt : int
number of observed peaks
"""
def __init__(self, q = 1e-4):
"""
Constructor
Parameters
----------
        q : float
Detection level (risk)
Returns
----------
SPOT object
"""
self.proba = q
self.extreme_quantile = None
self.data = None
self.init_data = None
self.init_threshold = None
self.peaks = None
self.n = 0
self.Nt = 0
def __str__(self):
s = ''
s += 'Streaming Peaks-Over-Threshold Object\n'
s += 'Detection level q = %s\n' % self.proba
if self.data is not None:
s += 'Data imported : Yes\n'
s += '\t initialization : %s values\n' % self.init_data.size
s += '\t stream : %s values\n' % self.data.size
else:
s += 'Data imported : No\n'
return s
if self.n == 0:
s += 'Algorithm initialized : No\n'
else:
s += 'Algorithm initialized : Yes\n'
s += '\t initial threshold : %s\n' % self.init_threshold
r = self.n-self.init_data.size
if r > 0:
s += 'Algorithm run : Yes\n'
s += '\t number of observations : %s (%.2f %%)\n' % (r,100*r/self.n)
else:
s += '\t number of peaks : %s\n' % self.Nt
s += '\t extreme quantile : %s\n' % self.extreme_quantile
s += 'Algorithm run : No\n'
return s
def fit(self,init_data,data):
"""
Import data to SPOT object
Parameters
----------
init_data : list, numpy.array or pandas.Series
initial batch to calibrate the algorithm
data : numpy.array
            data for the run (list, np.array or pd.Series)
"""
# print("init_data.shape = ", init_data.shape, ", data.shape = ", data.shape)
if isinstance(data,list):
self.data = np.array(data)
elif isinstance(data,np.ndarray):
self.data = data
elif isinstance(data,pd.Series):
self.data = data.values
else:
print('This data format (%s) is not supported' % type(data))
return
if isinstance(init_data,list):
self.init_data = np.array(init_data)
elif isinstance(init_data,np.ndarray):
self.init_data = init_data
elif isinstance(init_data,pd.Series):
self.init_data = init_data.values
elif isinstance(init_data,int):
self.init_data = self.data[:init_data]
self.data = self.data[init_data:]
        elif isinstance(init_data, float) and 0 < init_data < 1:
r = int(init_data*data.size)
self.init_data = self.data[:r]
self.data = self.data[r:]
else:
print('The initial data cannot be set')
return
def add(self,data):
"""
        This function appends new data to the already fitted data
Parameters
----------
data : list, numpy.array, pandas.Series
data to append
"""
if isinstance(data,list):
data = np.array(data)
elif isinstance(data,np.ndarray):
data = data
elif isinstance(data,pd.Series):
data = data.values
else:
print('This data format (%s) is not supported' % type(data))
return
self.data = np.append(self.data,data)
return
def initialize(self, level = 0.98, verbose = True):
"""
Run the calibration (initialization) step
Parameters
----------
level : float
(default 0.98) Probability associated with the initial threshold t
verbose : bool
(default = True) If True, gives details about the batch initialization
"""
level = level-floor(level)
n_init = self.init_data.size
S = np.sort(self.init_data) # we sort X to get the empirical quantile
self.init_threshold = S[int(level*n_init)] # t is fixed for the whole algorithm
# initial peaks
self.peaks = self.init_data[self.init_data>self.init_threshold]-self.init_threshold
self.Nt = self.peaks.size
self.n = n_init
if verbose:
print('Initial threshold : %s' % self.init_threshold)
print('Number of peaks : %s' % self.Nt)
print('Grimshaw maximum log-likelihood estimation ... ', end = '')
g,s,l = self._grimshaw()
self.extreme_quantile = self._quantile(g,s)
if verbose:
print('[done]')
print('\t'+chr(0x03B3) + ' = ' + str(g))
print('\t'+chr(0x03C3) + ' = ' + str(s))
print('\tL = ' + str(l))
print('Extreme quantile (probability = %s): %s' % (self.proba,self.extreme_quantile))
return
def _rootsFinder(fun,jac,bounds,npoints,method):
"""
Find possible roots of a scalar function
Parameters
----------
fun : function
scalar function
jac : function
first order derivative of the function
bounds : tuple
(min,max) interval for the roots search
npoints : int
maximum number of roots to output
method : str
'regular' : regular sample of the search interval, 'random' : uniform (distribution) sample of the search interval
Returns
----------
numpy.array
possible roots of the function
"""
if method == 'regular':
step = (bounds[1]-bounds[0])/(npoints+1)
# print("step = ", step, ", bounds[0] = ", bounds[0], ", bounds[1] = ", bounds[1])
if step == 0:
X0 = np.random.uniform(bounds[0],bounds[1],npoints)
else:
X0 = np.arange(bounds[0]+step,bounds[1],step)
## for the ucr 239 240 241
# step = (bounds[1] - bounds[0]) / (npoints+1)
# # print("step = ", step, ", bounds[0] = ", bounds[0], ", bounds[1] = ", bounds[1])
# X0 = np.arange(bounds[0], bounds[1], step)
elif method == 'random':
X0 = np.random.uniform(bounds[0],bounds[1],npoints)
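        # Roots are found indirectly: starting from the candidate points X0, we
        # minimize the sum of squared values of `fun` with L-BFGS-B (supplying
        # the analytic gradient 2*f(x)*jac(x)); minimizers where fun vanishes
        # are kept as root estimates.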
def objFun(X,f,jac):
g = 0
j = np.zeros(X.shape)
i = 0
for x in X:
fx = f(x)
g = g+fx**2
j[i] = 2*fx*jac(x)
i = i+1
return g,j
opt = minimize(lambda X:objFun(X,fun,jac), X0,
method='L-BFGS-B',
jac=True, bounds=[bounds]*len(X0))
        X = np.round(opt.x, decimals=5)  # keep the rounding so near-identical roots collapse in np.unique
        return np.unique(X)
def _log_likelihood(Y,gamma,sigma):
"""
Compute the log-likelihood for the Generalized Pareto Distribution (μ=0)
Parameters
----------
Y : numpy.array
observations
gamma : float
GPD index parameter
sigma : float
GPD scale parameter (>0)
Returns
----------
float
log-likelihood of the sample Y to be drawn from a GPD(γ,σ,μ=0)
"""
n = Y.size
if gamma != 0:
tau = gamma/sigma
L = -n * log(sigma) - ( 1 + (1/gamma) ) * ( np.log(1+tau*Y) ).sum()
else:
L = n * ( 1 + log(Y.mean()) )
return L
def _grimshaw(self,epsilon = 1e-8, n_points = 10):
"""
Compute the GPD parameters estimation with the Grimshaw's trick
Parameters
----------
epsilon : float
            numerical tolerance used in the root search (default : 1e-8)
n_points : int
maximum number of candidates for maximum likelihood (default : 10)
Returns
----------
gamma_best,sigma_best,ll_best
gamma estimates, sigma estimates and corresponding log-likelihood
"""
def u(s):
return 1 + np.log(s).mean()
def v(s):
return np.mean(1/s)
def w(Y,t):
s = 1+t*Y
us = u(s)
vs = v(s)
return us*vs-1
def jac_w(Y,t):
s = 1+t*Y
us = u(s)
vs = v(s)
jac_us = (1/t)*(1-vs)
jac_vs = (1/t)*(-vs+np.mean(1/s**2))
return us*jac_vs+vs*jac_us
Ym = self.peaks.min()
YM = self.peaks.max()
Ymean = self.peaks.mean()
a = -1/YM
if abs(a)<2*epsilon:
epsilon = abs(a)/n_points
a = a + epsilon
b = 2*(Ymean-Ym)/(Ymean*Ym)
c = 2*(Ymean-Ym)/(Ym**2)
# We look for possible roots
left_zeros = SPOT._rootsFinder(lambda t: w(self.peaks,t),
lambda t: jac_w(self.peaks,t),
(a+epsilon,-epsilon),
n_points,'regular')
right_zeros = SPOT._rootsFinder(lambda t: w(self.peaks,t),
lambda t: jac_w(self.peaks,t),
(b,c),
n_points,'regular')
# all the possible roots
zeros = np.concatenate((left_zeros,right_zeros))
# 0 is always a solution so we initialize with it
gamma_best = 0
sigma_best = Ymean
ll_best = SPOT._log_likelihood(self.peaks,gamma_best,sigma_best)
# we look for better candidates
for z in zeros:
gamma = u(1+z*self.peaks)-1
sigma = gamma/z
ll = SPOT._log_likelihood(self.peaks,gamma,sigma)
if ll>ll_best:
gamma_best = gamma
sigma_best = sigma
ll_best = ll
return gamma_best,sigma_best,ll_best
def _quantile(self,gamma,sigma):
"""
Compute the quantile at level 1-q
Parameters
----------
gamma : float
GPD parameter
sigma : float
GPD parameter
Returns
----------
float
quantile at level 1-q for the GPD(γ,σ,μ=0)
"""
r = self.n * self.proba / self.Nt
if gamma != 0:
return self.init_threshold + (sigma/gamma)*(pow(r,-gamma)-1)
else:
return self.init_threshold - sigma*log(r)
def run(self, with_alarm = True):
"""
Run SPOT on the stream
Parameters
----------
with_alarm : bool
            (default = True) If False, SPOT will adapt the threshold assuming \
            there are no abnormal values
Returns
----------
dict
            keys : 'thresholds', 'alarms' and 'scores'
            'thresholds' contains the extreme quantiles, 'alarms' contains the \
            indexes of the values which have triggered alarms and 'scores' \
            contains the observed values
"""
if (self.n>self.init_data.size):
print('Warning : the algorithm seems to have already been run, you \
should initialize before running again')
return {}
# list of the thresholds
th = []
alarm = []
scores = []
# Loop over the stream
for i in tqdm.tqdm(range(self.data.size)):
scores.append(self.data[i])
# If the observed value exceeds the current threshold (alarm case)
if self.data[i]>self.extreme_quantile:
# if we want to alarm, we put it in the alarm list
if with_alarm:
alarm.append(i)
# otherwise we add it in the peaks
else:
self.peaks = np.append(self.peaks,self.data[i]-self.init_threshold)
self.Nt += 1
self.n += 1
# and we update the thresholds
g,s,l = self._grimshaw()
self.extreme_quantile = self._quantile(g,s)
# case where the value exceeds the initial threshold but not the alarm ones
elif self.data[i]>self.init_threshold:
# we add it in the peaks
self.peaks = np.append(self.peaks,self.data[i]-self.init_threshold)
self.Nt += 1
self.n += 1
# and we update the thresholds
g,s,l = self._grimshaw()
self.extreme_quantile = self._quantile(g,s)
else:
self.n += 1
th.append(self.extreme_quantile) # thresholds record
return {'thresholds' : th, 'alarms': alarm, 'scores': scores}
def plot(self,run_results,with_alarm = True):
"""
        Plot the results given by the run
Parameters
----------
run_results : dict
results given by the 'run' method
with_alarm : bool
(default = True) If True, alarms are plotted.
Returns
----------
list
list of the plots
"""
x = range(self.data.size)
K = run_results.keys()
ts_fig, = plt.plot(x,self.data,color=air_force_blue)
fig = [ts_fig]
if 'thresholds' in K:
th = run_results['thresholds']
th_fig, = plt.plot(x,th,color=deep_saffron,lw=2,ls='dashed')
fig.append(th_fig)
if with_alarm and ('alarms' in K):
alarm = run_results['alarms']
al_fig = plt.scatter(alarm,self.data[alarm],color='red')
fig.append(al_fig)
plt.xlim((0,self.data.size))
return fig
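# A minimal usage sketch (illustrative only; `init_batch` and `stream` are
# hypothetical numpy arrays, not defined in this file):
#
#   model = SPOT(q=1e-4)            # detection level (risk)
#   model.fit(init_batch, stream)   # calibration batch + stream to monitor
#   model.initialize(level=0.98)    # POT calibration on the initial batch
#   res = model.run()               # dict with 'thresholds', 'alarms', 'scores'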
"""
================================= WITH DRIFT ==================================
"""
def backMean(X,d):
M = []
w = X[:d].sum()
M.append(w/d)
for i in range(d,len(X)):
w = w - X[i-d] + X[i]
M.append(w/d)
return np.array(M)
class dSPOT:
"""
    This class allows one to run the DSPOT algorithm on a univariate dataset (upper-bound detection only)
Attributes
----------
proba : float
Detection level (risk), chosen by the user
depth : int
Number of observations to compute the moving average
extreme_quantile : float
current threshold (bound between normal and abnormal events)
data : numpy.array
stream
init_data : numpy.array
initial batch of observations (for the calibration/initialization step)
init_threshold : float
initial threshold computed during the calibration step
peaks : numpy.array
array of peaks (excesses above the initial threshold)
n : int
number of observed values
Nt : int
number of observed peaks
"""
def __init__(self, q, depth):
self.proba = q
self.extreme_quantile = None
self.data = None
self.init_data = None
self.init_threshold = None
self.peaks = None
self.n = 0
self.Nt = 0
self.depth = depth
def __str__(self):
s = ''
s += 'Streaming Peaks-Over-Threshold Object\n'
s += 'Detection level q = %s\n' % self.proba
if self.data is not None:
s += 'Data imported : Yes\n'
s += '\t initialization : %s values\n' % self.init_data.size
s += '\t stream : %s values\n' % self.data.size
else:
s += 'Data imported : No\n'
return s
if self.n == 0:
s += 'Algorithm initialized : No\n'
else:
s += 'Algorithm initialized : Yes\n'
s += '\t initial threshold : %s\n' % self.init_threshold
r = self.n-self.init_data.size
if r > 0:
s += 'Algorithm run : Yes\n'
s += '\t number of observations : %s (%.2f %%)\n' % (r,100*r/self.n)
                alarms = getattr(self, 'alarm', [])  # run() keeps alarms local, so guard against AttributeError
                s += '\t triggered alarms : %s (%.2f %%)\n' % (len(alarms), 100 * len(alarms) / self.n)
else:
s += '\t number of peaks : %s\n' % self.Nt
s += '\t extreme quantile : %s\n' % self.extreme_quantile
s += 'Algorithm run : No\n'
return s
def fit(self,init_data,data):
"""
Import data to DSPOT object
Parameters
----------
init_data : list, numpy.array or pandas.Series
initial batch to calibrate the algorithm
data : numpy.array
            data for the run (list, np.array or pd.Series)
"""
if isinstance(data,list):
self.data = np.array(data)
elif isinstance(data,np.ndarray):
self.data = data
elif isinstance(data,pd.Series):
self.data = data.values
else:
print('This data format (%s) is not supported' % type(data))
return
if isinstance(init_data,list):
self.init_data = np.array(init_data)
elif isinstance(init_data,np.ndarray):
self.init_data = init_data
elif isinstance(init_data,pd.Series):
self.init_data = init_data.values
elif isinstance(init_data,int):
self.init_data = self.data[:init_data]
self.data = self.data[init_data:]
        elif isinstance(init_data, float) and 0 < init_data < 1:
r = int(init_data*data.size)
self.init_data = self.data[:r]
self.data = self.data[r:]
else:
print('The initial data cannot be set')
return
def add(self,data):
"""
        This function appends new data to the already fitted data
Parameters
----------
data : list, numpy.array, pandas.Series
data to append
"""
if isinstance(data,list):
data = np.array(data)
elif isinstance(data,np.ndarray):
data = data
elif isinstance(data,pd.Series):
data = data.values
else:
print('This data format (%s) is not supported' % type(data))
return
self.data = np.append(self.data,data)
return
def initialize(self, verbose = True):
"""
Run the calibration (initialization) step
Parameters
----------
verbose : bool
(default = True) If True, gives details about the batch initialization
"""
n_init = self.init_data.size - self.depth
M = backMean(self.init_data,self.depth)
T = self.init_data[self.depth:]-M[:-1] # new variable
S = np.sort(T) # we sort X to get the empirical quantile
self.init_threshold = S[int(0.98*n_init)] # t is fixed for the whole algorithm
# initial peaks
self.peaks = T[T>self.init_threshold]-self.init_threshold
self.Nt = self.peaks.size
self.n = n_init
if verbose:
print('Initial threshold : %s' % self.init_threshold)
print('Number of peaks : %s' % self.Nt)
print('Grimshaw maximum log-likelihood estimation ... ', end = '')
g,s,l = self._grimshaw()
self.extreme_quantile = self._quantile(g,s)
if verbose:
print('[done]')
print('\t'+chr(0x03B3) + ' = ' + str(g))
print('\t'+chr(0x03C3) + ' = ' + str(s))
print('\tL = ' + str(l))
print('Extreme quantile (probability = %s): %s' % (self.proba,self.extreme_quantile))
return
def _rootsFinder(fun,jac,bounds,npoints,method):
"""
Find possible roots of a scalar function
Parameters
----------
fun : function
scalar function
jac : function
first order derivative of the function
bounds : tuple
(min,max) interval for the roots search
npoints : int
maximum number of roots to output
method : str
'regular' : regular sample of the search interval, 'random' : uniform (distribution) sample of the search interval
Returns
----------
numpy.array
possible roots of the function
"""
if method == 'regular':
step = (bounds[1]-bounds[0])/(npoints+1)
X0 = np.arange(bounds[0]+step,bounds[1],step)
elif method == 'random':
X0 = np.random.uniform(bounds[0],bounds[1],npoints)
def objFun(X,f,jac):
g = 0
j = np.zeros(X.shape)
i = 0
for x in X:
fx = f(x)
g = g+fx**2
j[i] = 2*fx*jac(x)
i = i+1
return g,j
opt = minimize(lambda X:objFun(X,fun,jac), X0,
method='L-BFGS-B',
jac=True, bounds=[bounds]*len(X0))
        X = np.round(opt.x, decimals=5)  # keep the rounding so near-identical roots collapse in np.unique
        return np.unique(X)
def _log_likelihood(Y,gamma,sigma):
"""
Compute the log-likelihood for the Generalized Pareto Distribution (μ=0)
Parameters
----------
Y : numpy.array
observations
gamma : float
GPD index parameter
sigma : float
GPD scale parameter (>0)
Returns
----------
float
log-likelihood of the sample Y to be drawn from a GPD(γ,σ,μ=0)
"""
n = Y.size
if gamma != 0:
tau = gamma/sigma
L = -n * log(sigma) - ( 1 + (1/gamma) ) * ( np.log(1+tau*Y) ).sum()
else:
L = n * ( 1 + log(Y.mean()) )
return L
def _grimshaw(self,epsilon = 1e-8, n_points = 10):
"""
Compute the GPD parameters estimation with the Grimshaw's trick
Parameters
----------
epsilon : float
            numerical tolerance used in the root search (default : 1e-8)
n_points : int
maximum number of candidates for maximum likelihood (default : 10)
Returns
----------
gamma_best,sigma_best,ll_best
gamma estimates, sigma estimates and corresponding log-likelihood
"""
def u(s):
return 1 + np.log(s).mean()
def v(s):
return np.mean(1/s)
def w(Y,t):
s = 1+t*Y
us = u(s)
vs = v(s)
return us*vs-1
def jac_w(Y,t):
s = 1+t*Y
us = u(s)
vs = v(s)
jac_us = (1/t)*(1-vs)
jac_vs = (1/t)*(-vs+np.mean(1/s**2))
return us*jac_vs+vs*jac_us
Ym = self.peaks.min()
YM = self.peaks.max()
Ymean = self.peaks.mean()
a = -1/YM
if abs(a)<2*epsilon:
epsilon = abs(a)/n_points
a = a + epsilon
b = 2*(Ymean-Ym)/(Ymean*Ym)
c = 2*(Ymean-Ym)/(Ym**2)
# We look for possible roots
        left_zeros = dSPOT._rootsFinder(lambda t: w(self.peaks,t),
lambda t: jac_w(self.peaks,t),
(a+epsilon,-epsilon),
n_points,'regular')
        right_zeros = dSPOT._rootsFinder(lambda t: w(self.peaks,t),
lambda t: jac_w(self.peaks,t),
(b,c),
n_points,'regular')
# all the possible roots
zeros = np.concatenate((left_zeros,right_zeros))
# 0 is always a solution so we initialize with it
gamma_best = 0
sigma_best = Ymean
        ll_best = dSPOT._log_likelihood(self.peaks,gamma_best,sigma_best)
# we look for better candidates
for z in zeros:
gamma = u(1+z*self.peaks)-1
sigma = gamma/z
ll = dSPOT._log_likelihood(self.peaks,gamma,sigma)
if ll>ll_best:
gamma_best = gamma
sigma_best = sigma
ll_best = ll
return gamma_best,sigma_best,ll_best
def _quantile(self,gamma,sigma):
"""
Compute the quantile at level 1-q
Parameters
----------
gamma : float
GPD parameter
sigma : float
GPD parameter
Returns
----------
float
quantile at level 1-q for the GPD(γ,σ,μ=0)
"""
r = self.n * self.proba / self.Nt
if gamma != 0:
return self.init_threshold + (sigma/gamma)*(pow(r,-gamma)-1)
else:
return self.init_threshold - sigma*log(r)
def run(self, with_alarm = True):
"""
        Run DSPOT on the stream
Parameters
----------
with_alarm : bool
            (default = True) If False, DSPOT will adapt the threshold assuming \
            there are no abnormal values
Returns
----------
dict
            keys : 'thresholds', 'alarms' and 'scores'
            'thresholds' contains the extreme quantiles (with the local drift \
            added back), 'alarms' contains the indexes of the values which \
            have triggered alarms and 'scores' contains the drift-corrected values
"""
if (self.n>self.init_data.size):
print('Warning : the algorithm seems to have already been run, you \
should initialize before running again')
return {}
# actual normal window
W = self.init_data[-self.depth:]
# list of the thresholds
th = []
alarm = []
scores = []
# Loop over the stream
for i in tqdm.tqdm(range(self.data.size)):
Mi = W.mean()
scores.append((self.data[i]-Mi))
# If the observed value exceeds the current threshold (alarm case)
if (self.data[i]-Mi)>self.extreme_quantile:
# if we want to alarm, we put it in the alarm list
if with_alarm:
alarm.append(i)
# otherwise we add it in the peaks
else:
self.peaks = np.append(self.peaks,self.data[i]-Mi-self.init_threshold)
self.Nt += 1
self.n += 1
# and we update the thresholds
g,s,l = self._grimshaw()
self.extreme_quantile = self._quantile(g,s) #+ Mi
W = np.append(W[1:],self.data[i])
# case where the value exceeds the initial threshold but not the alarm ones
elif (self.data[i]-Mi)>self.init_threshold:
# we add it in the peaks
self.peaks = np.append(self.peaks,self.data[i]-Mi-self.init_threshold)
self.Nt += 1
self.n += 1
# and we update the thresholds
g,s,l = self._grimshaw()
self.extreme_quantile = self._quantile(g,s) #+ Mi
W = np.append(W[1:],self.data[i])
else:
self.n += 1
W = np.append(W[1:],self.data[i])
th.append(self.extreme_quantile+Mi) # thresholds record
return {'thresholds' : th, 'alarms': alarm, 'scores': scores}
def plot(self,run_results, with_alarm = True):
"""
Plot the results given by the run
Parameters
----------
run_results : dict
results given by the 'run' method
with_alarm : bool
(default = True) If True, alarms are plotted.
Returns
----------
list
list of the plots
"""
x = range(self.data.size)
K = run_results.keys()
ts_fig, = plt.plot(x,self.data,color=air_force_blue)
fig = [ts_fig]
# if 'upper_thresholds' in K:
# thup = run_results['upper_thresholds']
# uth_fig, = plt.plot(x,thup,color=deep_saffron,lw=2,ls='dashed')
# fig.append(uth_fig)
#
# if 'lower_thresholds' in K:
# thdown = run_results['lower_thresholds']
# lth_fig, = plt.plot(x,thdown,color=deep_saffron,lw=2,ls='dashed')
# fig.append(lth_fig)
if 'thresholds' in K:
th = run_results['thresholds']
th_fig, = plt.plot(x,th,color=deep_saffron,lw=2,ls='dashed')
fig.append(th_fig)
if with_alarm and ('alarms' in K):
alarm = run_results['alarms']
if len(alarm)>0:
plt.scatter(alarm,self.data[alarm],color='red')
plt.xlim((0,self.data.size))
return fig
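# A minimal usage sketch for the drift-aware variant (illustrative only;
# `init_batch` and `stream` are hypothetical numpy arrays):
#
#   model = dSPOT(q=1e-4, depth=10)  # depth = moving-average window
#   model.fit(init_batch, stream)
#   model.initialize()
#   res = model.run()                # thresholds include the local drift term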
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/tasks/__init__.py
================================================
from .anomaly_detection import eval_anomaly_detection, eval_anomaly_detection_coldstart
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/tasks/anomaly_detection.py
================================================
import numpy as np
import time
import pdb

import bottleneck as bn
from sklearn.metrics import f1_score, precision_score, recall_score
from tadpak import evaluate

from other_anomaly_baselines.metrics.affiliation.metrics import pr_from_events
from other_anomaly_baselines.metrics.affiliation.generics import convert_vector_to_events
from other_anomaly_baselines.metrics.vus.metrics import get_range_vus_roc
# consider delay threshold and missing segments
def get_range_proba(predict, label, delay=7):
splits = np.where(label[1:] != label[:-1])[0] + 1
is_anomaly = label[0] == 1
new_predict = np.array(predict)
pos = 0
for sp in splits:
if is_anomaly:
if 1 in predict[pos:min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
is_anomaly = not is_anomaly
pos = sp
sp = len(label)
if is_anomaly: # anomaly in the end
if 1 in predict[pos: min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
return new_predict
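# Example: with delay=7, a single hit within the first delay+1 points of a
# true anomaly segment marks the whole segment as detected:
#   label   = [0, 0, 1, 1, 1, 0]
#   predict = [0, 0, 0, 1, 0, 0]
#   get_range_proba(predict, label, delay=7) -> [0, 0, 1, 1, 1, 0]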
# set missing = 0
def reconstruct_label(timestamp, label):
timestamp = np.asarray(timestamp, np.int64)
index = np.argsort(timestamp)
timestamp_sorted = np.asarray(timestamp[index])
interval = np.min(np.diff(timestamp_sorted))
label = np.asarray(label, np.int64)
label = np.asarray(label[index])
idx = (timestamp_sorted - timestamp_sorted[0]) // interval
new_label = np.zeros(shape=((timestamp_sorted[-1] - timestamp_sorted[0]) // interval + 1,), dtype=np.int64)
new_label[idx] = label
return new_label
def eval_ad_result(test_pred_list, test_labels_list, test_timestamps_list, delay, pred_scores=None):
labels = []
pred = []
ts_scores = []
if pred_scores is not None:
for test_pred, test_labels, test_timestamps, test_score in zip(test_pred_list, test_labels_list, test_timestamps_list, pred_scores):
# assert test_pred.shape == test_labels.shape == test_timestamps.shape
# truncate all arrays to a common length
min_len = min(test_pred.shape[0], test_labels.shape[0], test_timestamps.shape[0])
test_pred = test_pred[:min_len]
test_labels = test_labels[:min_len]
test_timestamps = test_timestamps[:min_len]
test_score = test_score[:min_len]
test_labels = reconstruct_label(test_timestamps, test_labels)
test_pred = reconstruct_label(test_timestamps, test_pred)
test_pred = get_range_proba(test_pred, test_labels, delay)
labels.append(test_labels)
pred.append(test_pred)
ts_scores.append(test_score)
else:
for test_pred, test_labels, test_timestamps in zip(test_pred_list, test_labels_list, test_timestamps_list):
# assert test_pred.shape == test_labels.shape == test_timestamps.shape
test_labels = reconstruct_label(test_timestamps, test_labels)
test_pred = reconstruct_label(test_timestamps, test_pred)
test_pred = get_range_proba(test_pred, test_labels, delay)
labels.append(test_labels)
pred.append(test_pred)
labels = np.concatenate(labels)
pred = np.concatenate(pred)
if pred_scores is not None:
ts_scores = np.concatenate(ts_scores)
events_pred = convert_vector_to_events(pred)
events_gt = convert_vector_to_events(labels)
Trange = (0, len(labels))
affiliation = pr_from_events(events_pred, events_gt, Trange)
vus_results = get_range_vus_roc(labels, pred, 100) # default slidingWindow = 100
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"]
}
if pred_scores is not None:
min_len1 = min(ts_scores.shape[0], labels.shape[0])
results_f1_pa_k_10 = evaluate.evaluate(ts_scores[:min_len1], labels[:min_len1], k=10)
# results_f1_pa_k_30 = evaluate.evaluate(pred, labels, k=30)
results_f1_pa_k_50 = evaluate.evaluate(ts_scores[:min_len1], labels[:min_len1], k=50)
results_f1_pa_k_90 = evaluate.evaluate(ts_scores[:min_len1], labels[:min_len1], k=90)
eval_res['f1_pa_10'] = results_f1_pa_k_10['best_f1_w_pa']
eval_res['f1_pa_50'] = results_f1_pa_k_50['best_f1_w_pa']
eval_res['f1_pa_90'] = results_f1_pa_k_90['best_f1_w_pa']
return eval_res
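# 1-D analogue of pandas.Series.shift: a positive `num` shifts values to the
# right and pads the head with `fill_value`; a negative `num` shifts left and
# pads the tail.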
def np_shift(arr, num, fill_value=np.nan):
result = np.empty_like(arr)
if num > 0:
result[:num] = fill_value
result[num:] = arr[:-num]
elif num < 0:
result[num:] = fill_value
result[:num] = arr[-num:]
else:
result[:] = arr
return result
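# Point-adjustment (PA): once any point inside a ground-truth anomaly segment
# is predicted, every point of that segment (searching backward and forward
# from the hit) is counted as predicted.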
def adjustment(gt, pred):
anomaly_state = False
for i in range(len(gt)):
if gt[i] == 1 and pred[i] == 1 and not anomaly_state:
anomaly_state = True
for j in range(i, 0, -1):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
for j in range(i, len(gt)):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
elif gt[i] == 0:
anomaly_state = False
if anomaly_state:
pred[i] = 1
return gt, pred
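# TS2Vec-style streaming anomaly detection: the series is encoded twice, once
# with the last observation masked ('mask_last') and once without, and the L1
# distance between the two representations is the point-wise anomaly score.
# Scores are normalized by a 21-point moving average; points whose normalized
# score exceeds mean + 4 * std of the (warm-up-trimmed) training scores are
# flagged, and alarms within `delay` steps of a previous alarm are suppressed.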
def eval_anomaly_detection(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay, is_multi=False, ucr_index=None):
t = time.time()
all_train_repr = {}
all_test_repr = {}
all_train_repr_wom = {}
all_test_repr_wom = {}
if is_multi:
train_data = all_train_data
test_data = all_test_data
if test_data.shape[-1] > 2:
re_t = test_data.shape[-1]
else:
re_t = 1
full_repr = model.encode(
np.concatenate([train_data, test_data]).reshape(1, -1, re_t),
mask='mask_last',
casual=True,
sliding_length=1,
sliding_padding=200,
batch_size=256
).squeeze()
all_train_repr[0] = full_repr[:len(train_data)] # (n_timestamps, repr-dims)
all_test_repr[0] = full_repr[len(train_data):] # (n_timestamps, repr-dims)
full_repr_wom = model.encode(
np.concatenate([train_data, test_data]).reshape(1, -1, re_t),
casual=True,
sliding_length=1,
sliding_padding=200,
batch_size=256
).squeeze()
all_train_repr_wom[0] = full_repr_wom[:len(train_data)] # (n_timestamps, repr-dims)
all_test_repr_wom[0] = full_repr_wom[len(train_data):] # (n_timestamps, repr-dims)
else:
for k in all_train_data:
train_data = all_train_data[k]
test_data = all_test_data[k]
full_repr = model.encode(
np.concatenate([train_data, test_data]).reshape(1, -1, 1),
mask='mask_last',
casual=True,
sliding_length=1,
sliding_padding=200,
batch_size=256
).squeeze()
all_train_repr[k] = full_repr[:len(train_data)] # (n_timestamps, repr-dims)
all_test_repr[k] = full_repr[len(train_data):] # (n_timestamps, repr-dims)
full_repr_wom = model.encode(
np.concatenate([train_data, test_data]).reshape(1, -1, 1),
casual=True,
sliding_length=1,
sliding_padding=200,
batch_size=256
).squeeze()
all_train_repr_wom[k] = full_repr_wom[:len(train_data)] # (n_timestamps, repr-dims)
all_test_repr_wom[k] = full_repr_wom[len(train_data):] # (n_timestamps, repr-dims)
res_log = []
res_log_scores = []
labels_log = []
timestamps_log = []
if is_multi:
test_labels = all_test_labels
test_timestamps = all_test_timestamps
train_err = np.abs(all_train_repr_wom[0] - all_train_repr[0]).sum(axis=1)
test_err = np.abs(all_test_repr_wom[0] - all_test_repr[0]).sum(axis=1)
ma = np_shift(bn.move_mean(np.concatenate([train_err, test_err]), 21), 1)
train_err_adj = (train_err - ma[:len(train_err)]) / ma[:len(train_err)]
test_err_adj = (test_err - ma[len(train_err):]) / ma[len(train_err):]
train_err_adj = train_err_adj[22:]
thr = np.mean(train_err_adj) + 4 * np.std(train_err_adj)
test_res = (test_err_adj > thr) * 1
res_log_scores.append(test_err_adj)
for i in range(len(test_res)):
if i >= delay and test_res[i - delay:i].sum() >= 1:
test_res[i] = 0
res_log.append(test_res)
labels_log.append(test_labels)
timestamps_log.append(test_timestamps)
else:
for k in all_train_data:
test_labels = all_test_labels[k]
test_timestamps = all_test_timestamps[k]
train_err = np.abs(all_train_repr_wom[k] - all_train_repr[k]).sum(axis=1)
test_err = np.abs(all_test_repr_wom[k] - all_test_repr[k]).sum(axis=1)
ma = np_shift(bn.move_mean(np.concatenate([train_err, test_err]), 21), 1)
train_err_adj = (train_err - ma[:len(train_err)]) / ma[:len(train_err)]
test_err_adj = (test_err - ma[len(train_err):]) / ma[len(train_err):]
train_err_adj = train_err_adj[22:]
thr = np.mean(train_err_adj) + 4 * np.std(train_err_adj)
test_res = (test_err_adj > thr) * 1
res_log_scores.append(test_err_adj)
for i in range(len(test_res)):
if i >= delay and test_res[i-delay:i].sum() >= 1:
test_res[i] = 0
res_log.append(test_res)
labels_log.append(test_labels)
timestamps_log.append(test_timestamps)
t = time.time() - t
if is_multi:
labels = np.asarray(labels_log, np.int64)[0]
pred = np.asarray(res_log, np.int64)[0]
# print("labels.shape = ", labels.shape, labels[:5])
# print("pred.shape = ", pred.shape, pred[:5])
events_pred = convert_vector_to_events(pred)
events_gt = convert_vector_to_events(labels)
Trange = (0, len(labels))
# print("labels.shape = ", labels.shape, "pred.shape = ", pred.shape)
# print("events_pred.shape = ", len(events_pred), ", events_gt.shape = ", len(events_gt), ", Trange = ", Trange)
if ucr_index in (79, 108, 187, 203):
pred_scores = np.asarray(res_log_scores, np.float64)[0]
# results_f1_pa_k_10 = evaluate.evaluate(pred_scores, labels, k=10)
# # results_f1_pa_k_30 = evaluate.evaluate(pred, labels, k=30)
# results_f1_pa_k_50 = evaluate.evaluate(pred_scores, labels, k=50)
# results_f1_pa_k_90 = evaluate.evaluate(pred_scores, labels, k=90)
labels, pred = adjustment(labels, pred)
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": None,
"Affiliation recall": None,
"R_AUC_ROC": None,
"R_AUC_PR": None,
"VUS_ROC": None,
"VUS_PR": None,
'f1_pa_10': None,
# 'results_f1_pa_k_10_th_w_pa': results_f1_pa_k_10['pa_f1_scores'],
'f1_pa_50': None,
# 'results_f1_pa_k_50_th_w_pa': results_f1_pa_k_50['pa_f1_scores'],
'f1_pa_90': None,
# 'results_f1_pa_k_90_th_w_pa': results_f1_pa_k_90['pa_f1_scores'],
# 'results_f1_pa_k_10_wpa': f1_score(labels, results_f1_pa_k_10),
# # 'results_f1_pa_k_10_th_w_pa': results_f1_pa_k_10['best_f1_th_w_pa'],
# 'results_f1_pa_k_50_wpa': f1_score(labels, results_f1_pa_k_50),
# # 'results_f1_pa_k_50_th_w_pa': results_f1_pa_k_50['best_f1_th_w_pa'],
# 'results_f1_pa_k_90_wpa': f1_score(labels, results_f1_pa_k_90),
# 'results_f1_pa_k_90_th_w_pa': results_f1_pa_k_90['best_f1_th_w_pa'],
}
else:
affiliation = pr_from_events(events_pred, events_gt, Trange)
vus_results = get_range_vus_roc(labels, pred, 100) # default slidingWindow = 100
pred_scores = np.asarray(res_log_scores, np.float64)[0]
# print("pred_scores.shape = ", pred_scores.shape, labels.shape)
# print("pred_scores.shape = ", pred_scores[:10])
# print("labels.shape = ", labels[:10])
results_f1_pa_k_10 = evaluate.evaluate(pred_scores, labels, k=10)
# results_f1_pa_k_30 = evaluate.evaluate(pred, labels, k=30)
results_f1_pa_k_50 = evaluate.evaluate(pred_scores, labels, k=50)
results_f1_pa_k_90 = evaluate.evaluate(pred_scores, labels, k=90)
labels, pred = adjustment(labels, pred)
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"],
'f1_pa_10': results_f1_pa_k_10['best_f1_w_pa'],
# 'results_f1_pa_k_10_th_w_pa': results_f1_pa_k_10['pa_f1_scores'],
'f1_pa_50': results_f1_pa_k_50['best_f1_w_pa'],
# 'results_f1_pa_k_50_th_w_pa': results_f1_pa_k_50['pa_f1_scores'],
'f1_pa_90': results_f1_pa_k_90['best_f1_w_pa'],
# 'results_f1_pa_k_90_th_w_pa': results_f1_pa_k_90['pa_f1_scores'],
# 'results_f1_pa_k_10_wpa': f1_score(labels, results_f1_pa_k_10),
# # 'results_f1_pa_k_10_th_w_pa': results_f1_pa_k_10['best_f1_th_w_pa'],
# 'results_f1_pa_k_50_wpa': f1_score(labels, results_f1_pa_k_50),
# # 'results_f1_pa_k_50_th_w_pa': results_f1_pa_k_50['best_f1_th_w_pa'],
# 'results_f1_pa_k_90_wpa': f1_score(labels, results_f1_pa_k_90),
# 'results_f1_pa_k_90_th_w_pa': results_f1_pa_k_90['best_f1_th_w_pa'],
}
else:
eval_res = eval_ad_result(res_log, labels_log, timestamps_log, delay, pred_scores=res_log_scores)
eval_res['infer_time'] = t
return res_log, eval_res
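# Cold-start variant: train and test are concatenated and scored jointly, and
# instead of a fixed threshold fitted on the training split, an expanding
# moving mean + 4 * moving std (window length = full series, min_count =
# MIN_WINDOW) serves as a per-point threshold; the first MIN_WINDOW points
# are dropped from the evaluation.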
def eval_anomaly_detection_coldstart(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay):
t = time.time()
all_data = {}
all_repr = {}
all_repr_wom = {}
for k in all_train_data:
all_data[k] = np.concatenate([all_train_data[k], all_test_data[k]])
all_repr[k] = model.encode(
all_data[k].reshape(1, -1, 1),
mask='mask_last',
casual=True,
sliding_length=1,
sliding_padding=200,
batch_size=256
).squeeze()
all_repr_wom[k] = model.encode(
all_data[k].reshape(1, -1, 1),
casual=True,
sliding_length=1,
sliding_padding=200,
batch_size=256
).squeeze()
res_log = []
labels_log = []
timestamps_log = []
for k in all_data:
data = all_data[k]
labels = np.concatenate([all_train_labels[k], all_test_labels[k]])
timestamps = np.concatenate([all_train_timestamps[k], all_test_timestamps[k]])
err = np.abs(all_repr_wom[k] - all_repr[k]).sum(axis=1)
ma = np_shift(bn.move_mean(err, 21), 1)
err_adj = (err - ma) / ma
MIN_WINDOW = len(data) // 10
thr = bn.move_mean(err_adj, len(err_adj), MIN_WINDOW) + 4 * bn.move_std(err_adj, len(err_adj), MIN_WINDOW)
res = (err_adj > thr) * 1
for i in range(len(res)):
if i >= delay and res[i-delay:i].sum() >= 1:
res[i] = 0
res_log.append(res[MIN_WINDOW:])
labels_log.append(labels[MIN_WINDOW:])
timestamps_log.append(timestamps[MIN_WINDOW:])
t = time.time() - t
eval_res = eval_ad_result(res_log, labels_log, timestamps_log, delay)
eval_res['infer_time'] = t
return res_log, eval_res
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train.py
================================================
import torch
import numpy as np
import argparse
import os
import sys
import time
import datetime
from ts2vec import TS2Vec
import tasks
import datautils
from utils import init_dl_program, name_with_datetime, pkl_save, data_dropout
def save_checkpoint_callback(
save_every=1,
unit='epoch'
):
assert unit in ('epoch', 'iter')
def callback(model, loss):
n = model.n_epochs if unit == 'epoch' else model.n_iters
if n % save_every == 0:
model.save(f'{run_dir}/model_{n}.pkl')
return callback
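# Note: the returned callback closes over `run_dir`, which is only defined at
# module level inside the __main__ block below.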
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('dataset', help='The dataset name')
parser.add_argument('run_name', help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
parser.add_argument('--loader', type=str, required=True, help='The data loader used to load the experimental data. This can be set to anomaly or anomaly_coldstart')
parser.add_argument('--gpu', type=int, default=0, help='The gpu no. used for training and inference (defaults to 0)')
parser.add_argument('--batch-size', type=int, default=8, help='The batch size (defaults to 8)')
parser.add_argument('--lr', type=float, default=0.001, help='The learning rate (defaults to 0.001)')
parser.add_argument('--repr-dims', type=int, default=320, help='The representation dimension (defaults to 320)')
parser.add_argument('--max-train-length', type=int, default=3000, help='Sequences longer than this value are cropped into subsequences, each shorter than this value (defaults to 3000)')
parser.add_argument('--iters', type=int, default=None, help='The number of iterations')
parser.add_argument('--epochs', type=int, default=None, help='The number of epochs')
parser.add_argument('--save-every', type=int, default=None, help='Save the checkpoint every iterations/epochs')
parser.add_argument('--seed', type=int, default=None, help='The random seed')
parser.add_argument('--max-threads', type=int, default=None, help='The maximum allowed number of threads used by this process')
parser.add_argument('--eval', action="store_true", help='Whether to perform evaluation after training')
parser.add_argument('--irregular', type=float, default=0, help='The ratio of missing observations (defaults to 0)')
args = parser.parse_args()
print("Dataset:", args.dataset)
print("Arguments:", str(args))
device = init_dl_program(args.gpu, seed=args.seed, max_threads=args.max_threads)
print('Loading data... ', end='')
if args.loader == 'anomaly':
task_type = 'anomaly_detection'
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
train_data = datautils.gen_ano_train_data(all_train_data)
elif args.loader == 'anomaly_coldstart':
task_type = 'anomaly_detection_coldstart'
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
train_data, _, _, _ = datautils.load_UCR('FordA')
else:
raise ValueError(f"Unknown loader {args.loader}.")
if args.irregular > 0:
raise ValueError(f"Task type {task_type} is not supported when irregular>0.")
print('done')
config = dict(
batch_size=args.batch_size,
lr=args.lr,
output_dims=args.repr_dims,
max_train_length=args.max_train_length
)
if args.save_every is not None:
unit = 'epoch' if args.epochs is not None else 'iter'
config[f'after_{unit}_callback'] = save_checkpoint_callback(args.save_every, unit)
run_dir = 'training/' + args.dataset + '__' + name_with_datetime(args.run_name)
os.makedirs(run_dir, exist_ok=True)
t = time.time()
model = TS2Vec(
input_dims=train_data.shape[-1],
device=device,
**config
)
loss_log = model.fit(
train_data,
n_epochs=args.epochs,
n_iters=args.iters,
verbose=True
)
model.save(f'{run_dir}/model.pkl')
t = time.time() - t
print(f"\nTraining time: {datetime.timedelta(seconds=t)}")
print("Training time(seconds): ", t)
if args.eval:
if task_type == 'anomaly_detection':
out, eval_res = tasks.eval_anomaly_detection(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay)
elif task_type == 'anomaly_detection_coldstart':
out, eval_res = tasks.eval_anomaly_detection_coldstart(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay)
else:
assert False
pkl_save(f'{run_dir}/out.pkl', out)
pkl_save(f'{run_dir}/eval_res.pkl', eval_res)
print('Evaluation result:', eval_res)
print("Finished.")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/trainATbatch.py
================================================
import logging
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset, SequentialSampler
from utils import data_slice
import datautils
from transformers.optimization import AdamW, get_cosine_schedule_with_warmup
from sklearn.metrics import f1_score, precision_score, recall_score
import tasks
from ATmodelbatch import AnomalyTransformer
import time
import bottleneck as bn
import argparse
import os
import pickle
# consider delay threshold and missing segments
def get_range_proba(predict, label, delay=7):
splits = np.where(label[1:] != label[:-1])[0] + 1
is_anomaly = label[0] == 1
new_predict = np.array(predict)
pos = 0
for sp in splits:
if is_anomaly:
if 1 in predict[pos:min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
is_anomaly = not is_anomaly
pos = sp
sp = len(label)
if is_anomaly: # anomaly in the end
if 1 in predict[pos: min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
return new_predict
if torch.cuda.is_available():
torch.set_default_tensor_type('torch.cuda.DoubleTensor')
else:
torch.set_default_tensor_type('torch.DoubleTensor')
logger = logging.getLogger(__name__)
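# Hard-coded hyperparameters for the Anomaly Transformer baseline:
# `window_size` is the sliding-window length fed to the model, and `lambda_`
# is passed to the model constructor; per the Anomaly Transformer formulation
# it weights the association-discrepancy term (an assumption, not verified
# against the model code here).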
class Config:
window_size = 100
shuffle = True
epochs = 3
warmup_ratio = 0.1
lr = 1e-3
adam_epsilon = 1e-6
batch_size = 32
in_channel = 1
dataset_name = "kpi"
d_model = 512
layers = 3
lambda_ = 3
save_dir = './save_models'
save_every_epoch = 2
is_train = False
is_eval = True
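# Minimax training loop: each batch takes two backward passes, first on
# min_loss (with retain_graph=True so the forward graph survives the first
# backward) and then on max_loss, consistent with the Anomaly Transformer's
# alternating minimize/maximize strategy.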
def train(config, model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels,
all_test_timestamps, delay):
# train_data = datautils.gen_ano_train_data(all_train_data)
train_data = all_train_data
config.in_channel = train_data.shape[-1]
train_data = data_slice(train_data, config.window_size)
train_data = torch.from_numpy(train_data)
if torch.cuda.is_available():
train_data = train_data.cuda()
train_dataset = TensorDataset(train_data)
train_dataloader = DataLoader(train_dataset, batch_size=min(config.batch_size, len(train_dataset)),
shuffle=config.shuffle, drop_last=True, generator=torch.Generator(device='cuda:0'))
total_steps = int(len(train_dataloader) * config.epochs)
warmup_steps = max(int(total_steps * config.warmup_ratio), 200)
optimizer = AdamW(
model.parameters(),
lr=config.lr,
eps=config.adam_epsilon,
)
scheduler = get_cosine_schedule_with_warmup(
optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
print("Total steps: {}".format(total_steps))
print("Warmup steps: {}".format(warmup_steps))
for epoch in range(int(config.epochs)):
print(epoch)
if (epoch + 1) % config.save_every_epoch == 0:
path = config.save_dir + '/' + model.to_string() + '_epoch:%d' % (epoch + 1)
os.makedirs(path, exist_ok=True)
torch.save(model, path + '/model.pt')
f1, pre, recall = evaluate(config, epoch + 1, model, all_train_data, all_train_labels, all_train_timestamps,
all_test_data, all_test_labels, all_test_timestamps, delay)
print('epoch:%d\tf1:%f\tp:%f\tr:%f' % (epoch + 1, f1, pre, recall))
model.zero_grad()
for step, batch in enumerate(train_dataloader):
batch = batch[0]
model(batch)
min_loss = model.min_loss(batch)
max_loss = model.max_loss(batch)
print('minloss:%f\tmaxloss:%f' % (min_loss.detach().cpu(),max_loss.detach().cpu()))
optimizer.zero_grad()
min_loss.backward(retain_graph=True)
max_loss.backward()
optimizer.step()
scheduler.step()
def np_shift(arr, num, fill_value=np.nan):
result = np.empty_like(arr)
if num > 0:
result[:num] = fill_value
result[num:] = arr[:-num]
elif num < 0:
result[num:] = fill_value
result[:num] = arr[-num:]
else:
result[:] = arr
return result
# set missing = 0
def reconstruct_label(timestamp, label):
timestamp = np.asarray(timestamp, np.int64)
index = np.argsort(timestamp)
timestamp_sorted = np.asarray(timestamp[index])
interval = np.min(np.diff(timestamp_sorted))
label = np.asarray(label, np.int64)
label = np.asarray(label[index])
idx = (timestamp_sorted - timestamp_sorted[0]) // interval
new_label = np.zeros(shape=((timestamp_sorted[-1] - timestamp_sorted[0]) // interval + 1,), dtype=np.int64)
new_label[idx] = label
return new_label
def eval_ad_result(test_pred_list, test_labels_list, test_timestamps_list, delay):
labels = []
pred = []
for test_pred, test_labels, test_timestamps in zip(test_pred_list, test_labels_list, test_timestamps_list):
assert test_pred.shape == test_labels.shape == test_timestamps.shape
test_labels = reconstruct_label(test_timestamps, test_labels)
test_pred = reconstruct_label(test_timestamps, test_pred)
test_pred = get_range_proba(test_pred, test_labels, delay)
labels.append(test_labels)
pred.append(test_pred)
labels = np.concatenate(labels)
pred = np.concatenate(pred)
return {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred)
}
def evaluate(config, cur_epoch, model, all_train_data, all_train_labels, all_train_timestamps, all_test_data,
all_test_labels, all_test_timestamps, delay):
res_log = []
labels_log = []
timestamps_log = []
t = time.time()
for k in all_train_data:
print("k = ", k)
train_data = all_train_data[k]
train_labels = all_train_labels[k]
train_timestamps = all_train_timestamps[k]
train_length = train_labels.shape[0]
test_data = all_test_data[k]
test_labels = all_test_labels[k]
test_timestamps = all_test_timestamps[k]
test_length = test_labels.shape[0]
train_err = model.anomaly_score_whole(train_data).detach().cpu().numpy()
test_err = model.anomaly_score_whole(test_data).detach().cpu().numpy()
train_err = train_err[:train_length]
test_err = test_err[:test_length]
ma = np_shift(bn.move_mean(np.concatenate([train_err, test_err]), 21), 1)
train_err_adj = (train_err - ma[:len(train_err)]) / ma[:len(train_err)]
test_err_adj = (test_err - ma[len(train_err):]) / ma[len(train_err):]
train_err_adj = train_err_adj[22:]
thr = np.mean(train_err_adj) + 4 * np.std(train_err_adj)
test_res = (test_err_adj > thr) * 1
for i in range(len(test_res)):
if i >= delay and test_res[i - delay:i].sum() >= 1:
test_res[i] = 0
res_log.append(test_res)
labels_log.append(test_labels)
timestamps_log.append(test_timestamps)
break
t = time.time() - t
eval_res = eval_ad_result(res_log, labels_log, timestamps_log, delay)
eval_res['infer_time'] = t
# eval_res: {'f1': ..., 'precision': ..., 'recall': ..., 'infer_time': ...}
# save results
path = config.save_dir + '/' + model.to_string() + '_epoch:%d' % (cur_epoch)
os.makedirs(path, exist_ok=True)
with open(path + '/res_log.pkl', 'wb') as f:
pickle.dump(res_log, f)
with open(path + '/eval_res.pkl', 'wb') as f:
pickle.dump(eval_res, f)
with open(path + '/results.txt', 'w') as f:
f.write('f1:%f\tp:%f\tr:%f\n' % (eval_res['f1'], eval_res['precision'], eval_res['recall']))
return eval_res['f1'], eval_res['precision'], eval_res['recall']
def main(config):
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', default='kpi',
help='The dataset name, yahoo, kpi') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--is_multi', default=False, help='Whether to use the multivariate data loader')
parser.add_argument('--datapath', default='./datasets/', help='')
parser.add_argument('--index', type=int, default=143, help='')
parser.add_argument('--batch_size', type=int, default=32, help='The batch size (defaults to 8)')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='anomaly_transformer_0719.csv')
args = parser.parse_args()
config.dataset_name = args.dataset
if args.is_multi:
from datasets.data_loader import get_loader_segment
data_path = args.datapath + args.dataset + '/'
print("data_path = ", data_path)
_, train_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100,
mode='train',
dataset=args.dataset)
all_train_data = train_data_loader.train
all_train_labels = None
all_train_timestamps = None
all_test_data = train_data_loader.test
all_test_labels = train_data_loader.test_labels
all_test_timestamps = None
delay = 5
print("all_train_data test_data, test_labels.shape = ", all_train_data.shape, all_test_data.shape,
all_test_labels.shape)
all_train_data = np.expand_dims(all_train_data, axis=0)
print("train_data.shape = ", all_train_data.shape)
print("Read Success!!!")
config.in_channel = all_train_data.shape[-1]
else:
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(
args.dataset)
print('data loaded!')
model = AnomalyTransformer(config.batch_size, config.window_size, config.in_channel, config.d_model, config.layers,
config.lambda_)
model = model.cuda()
print('model built!')
print('training started!')
if config.is_train:
model.train()
train(config, model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels,
all_test_timestamps, delay)
'''save_trained_model'''
path = config.save_dir + '/' + model.to_string() + '_epoch:%d' % (config.epochs)
os.makedirs(path, exist_ok=True)
torch.save(model, path + '/model.pt')
print('training finished! evaluating...')
if config.is_eval:
model.eval()
f1, pre, recall = evaluate(config, config.epochs, model, all_train_data, all_train_labels,
all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay)
print("f1 = ", f1, ", precision = ", pre, ", recall = ", recall)
print('evaluate finished!')
if __name__ == "__main__":
config = Config()
main(config)
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_at_multi.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import os
import argparse
from torch.backends import cudnn
from other_anomaly_baselines.datasets.data_loader import get_loader_segment
from other_anomaly_baselines.AT_solver import Solver, mkdir
import torch.multiprocessing as mp
import numpy as np
# Change the multiprocessing sharing strategy
mp.set_sharing_strategy('file_system')
def str2bool(v):
return v.lower() in ('true',)
def main(config, train_set, train_loader, val_set, val_loader, test_set, test_loader, dev_cuda):
cudnn.benchmark = True
if (not os.path.exists(config.model_save_path)):
mkdir(config.model_save_path)
solver = Solver(vars(config), train_set, train_loader, val_set, val_loader, test_set, test_loader, dev_cuda)
# if config.mode == 'train':
solver.train()
# elif config.mode == 'test':
eval_res = solver.test(ucr_index=config.index)
print("result_dict = ", eval_res)
eval_res['dataset'] = config.dataset + str(config.index)
import pandas as pd
# Save the evaluation results as one row in a CSV file
save_path = config.save_dir + config.save_csv_name
# Convert the result dict to a single-row DataFrame
df_new = pd.DataFrame([eval_res])
# Append to the existing results file if it already exists
if os.path.exists(save_path):
df_existing = pd.read_csv(save_path, index_col=0)
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
df_combined = df_new
# Write the combined DataFrame back to CSV
df_combined.to_csv(save_path, index=True, index_label="id")
return solver
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--lr', type=float, default=1e-4)
parser.add_argument('--num_epochs', type=int, default=3)
parser.add_argument('--k', type=int, default=3)
parser.add_argument('--win_size', type=int, default=100)
parser.add_argument('--input_c', type=int, default=38)
parser.add_argument('--output_c', type=int, default=38)
parser.add_argument('--batch_size', type=int, default=32)
parser.add_argument('--pretrained_model', type=str, default=None)
parser.add_argument('--dataset', type=str, default='UCR') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--mode', type=str, default='train', choices=['train', 'test'])
# parser.add_argument('--data_path', type=str, default='./dataset/creditcard_ts.csv')
parser.add_argument('--data_path', type=str, default='datasets/')
parser.add_argument('--model_save_path', type=str, default='checkpoints')
parser.add_argument('--anormly_ratio', type=float, default=0.9)
parser.add_argument('--index', type=int, default=143, help='')
parser.add_argument('--cuda', type=str, default='cuda:0')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='at_ucr_0727.csv')
config = parser.parse_args()
# Fall back to an alternative path if the default save directory does not exist
if not os.path.exists(config.save_dir):
config.save_dir = '/SSD/lz/tsm_ptms_anomaly_detection/result/'
print("save_dir = ", config.save_dir)  # sanity check
train_loader, train_set = get_loader_segment(config.index, config.data_path + config.dataset, batch_size=config.batch_size,
win_size=config.win_size, mode='train', dataset=config.dataset)
val_loader, val_set = get_loader_segment(config.index, config.data_path + config.dataset, batch_size=config.batch_size,
win_size=config.win_size, mode='val', dataset=config.dataset)
test_loader, test_set = get_loader_segment(config.index, config.data_path + config.dataset, batch_size=config.batch_size,
win_size=config.win_size, mode='test', dataset=config.dataset)
train_set = train_set.train
config.input_c = train_set.shape[-1]
config.output_c = train_set.shape[-1]
args = vars(config)
print('------------ Options -------------')
for k, v in sorted(args.items()):
print('%s: %s' % (str(k), str(v)))
print('-------------- End ----------------')
main(config, train_set, train_loader, val_set, val_loader, test_set, test_loader, config.cuda)
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_at_uni.py
================================================
import os
import sys
import numpy as np
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import os
import argparse
from torch.utils.data import TensorDataset, DataLoader
import torch
from torch.backends import cudnn
from other_anomaly_baselines.datasets.data_loader import get_loader_segment
from other_anomaly_baselines.AT_solver import Solver, mkdir
import datautils
import numpy as np
import torch.multiprocessing as mp
mp.set_sharing_strategy('file_system')
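# Minimal map-style dataset that cuts a univariate series into overlapping
# windows of length `win_size` with stride `step`.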
class UniLoader(object):
def __init__(self, data_set, win_size, step, mode="train"):
self.mode = mode
self.step = step
self.win_size = win_size
self.train = data_set
def __len__(self):
"""
Number of sliding windows in the dataset.
"""
return (self.train.shape[0] - self.win_size) // self.step + 1
def __getitem__(self, index):
index = index * self.step
return np.float32(self.train[index:index + self.win_size])
def str2bool(v):
return v.lower() in ('true',)
def main(config, train_set, train_loader, val_set, val_loader, test_set, test_loader, dev_cuda, all_train_data, all_test_data, all_test_labels, all_test_timestamps, delay, train_data):
cudnn.benchmark = True
if (not os.path.exists(config.model_save_path)):
mkdir(config.model_save_path)
for i in range(train_data.shape[0]):
print("i = ", i, ", total num = ", train_data.shape[0])
print("train_data.shape = ", train_data.shape)
_train_data = train_data[i]
print("000train_data.shape = ", train_data.shape, type(train_data))
_train_data = np.array(_train_data)
print("111_train_data.shape = ", _train_data.shape, type(_train_data))
train_dataset = UniLoader(_train_data, config.win_size, 1)
train_loader = DataLoader(dataset=train_dataset,
batch_size=config.batch_size,
shuffle=True,
num_workers=2,
drop_last=True)
solver = Solver(vars(config), train_dataset, train_loader, val_set, val_loader, test_set, test_loader, dev_cuda)
break
# if config.mode == 'train':
# for _uni_train_set in train_set:
# solver.train_uni()
# elif config.mode == 'test':
eval_res = solver.test_uni(all_train_data, all_test_data, all_test_labels, all_test_timestamps, delay, config)
print("result_dict = ", eval_res)
eval_res['dataset'] = config.dataset + str(config.index)
import pandas as pd
# Save the evaluation results as one row in a CSV file
save_path = config.save_dir + config.save_csv_name
# Convert the result dict to a single-row DataFrame
df_new = pd.DataFrame([eval_res])
# Append to the existing results file if it already exists
if os.path.exists(save_path):
df_existing = pd.read_csv(save_path, index_col=0)
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
df_combined = df_new
# Write the combined DataFrame back to CSV
df_combined.to_csv(save_path, index=True, index_label="id")
return solver
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--lr', type=float, default=1e-4)
parser.add_argument('--num_epochs', type=int, default=1)
parser.add_argument('--k', type=int, default=3)
parser.add_argument('--win_size', type=int, default=100)
parser.add_argument('--input_c', type=int, default=38)
parser.add_argument('--output_c', type=int, default=38)
parser.add_argument('--batch_size', type=int, default=32)
parser.add_argument('--pretrained_model', type=str, default=None)
parser.add_argument('--dataset', type=str, default='yahoo') ## kpi, yahoo
parser.add_argument('--mode', type=str, default='train', choices=['train', 'test'])
# parser.add_argument('--data_path', type=str, default='./dataset/creditcard_ts.csv')
parser.add_argument('--data_path', type=str, default='datasets/')
parser.add_argument('--model_save_path', type=str, default='checkpoints')
parser.add_argument('--anormly_ratio', type=float, default=1.0)
parser.add_argument('--index', type=int, default=143, help='')
parser.add_argument('--cuda', type=str, default='cuda:0')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='at_uni_0722.csv')
config = parser.parse_args()
# Fall back to an alternative path if the default save directory does not exist
if not os.path.exists(config.save_dir):
config.save_dir = '/SSD/lz/tsm_ptms_anomaly_detection/result/'
print("save_dir = ", config.save_dir)  # sanity check
dataset = 'MSL'
_train_loader, _train_set = get_loader_segment(config.index, config.data_path + dataset,
batch_size=config.batch_size,
win_size=config.win_size, mode='train', dataset=dataset)
_train_set = _train_set.train
print("_train_set.shape = ", _train_set.shape)
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(
config.dataset)
train_data = datautils.gen_ano_train_data(all_train_data)
print("train_data.shape = ", train_data.shape)
_train_data = np.array(train_data[0])
print("_train_data.shape = ", _train_data.shape, type(_train_data))
train_dataset = UniLoader(_train_data, config.win_size, 1)
train_loader = DataLoader(dataset=train_dataset,
batch_size=config.batch_size,
shuffle=True,
num_workers=2,
drop_last=True)
val_loader = train_loader
config.input_c = train_data.shape[-1]
config.output_c = train_data.shape[-1]
args = vars(config)
print('------------ Options -------------')
for k, v in sorted(args.items()):
print('%s: %s' % (str(k), str(v)))
print('-------------- End ----------------')
main(config, train_dataset, train_loader, train_dataset, val_loader, train_dataset, val_loader, config.cuda, all_train_data, all_test_data,
all_test_labels, all_test_timestamps, delay, train_data)
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_dcdetector.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import os
import argparse
import numpy as np
import torch
from torch.autograd import Variable
from torch.backends import cudnn
from other_anomaly_baselines.dcdetector_solver import Solver
import time
import warnings
import sys
from other_anomaly_baselines.datasets.data_loader import get_loader_segment
warnings.filterwarnings('ignore')
def to_var(x, volatile=False):
if torch.cuda.is_available():
x = x.cuda()
return Variable(x, volatile=volatile)
def mkdir(directory):
if not os.path.exists(directory):
os.makedirs(directory)
class Logger(object):
def __init__(self, filename='default.log', add_flag=True, stream=sys.stdout):
self.terminal = stream
self.filename = filename
self.add_flag = add_flag
def write(self, message):
if self.add_flag:
with open(self.filename, 'a+') as log:
self.terminal.write(message)
log.write(message)
else:
with open(self.filename, 'w') as log:
self.terminal.write(message)
log.write(message)
def flush(self):
pass
def str2bool(v):
return v.lower() in ('true',)
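# Pick a batch size from the buffer: note the `idx - 1`, which returns the
# bucket just below the nearest match (and wraps to the last bucket when the
# nearest match is the first element).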
def find_nearest(array, value):
array = np.asarray(array)
idx = (np.abs(array - value)).argmin()
return int(array[idx - 1])
def main(config):
cudnn.benchmark = True
if (not os.path.exists(config.model_save_path)):
mkdir(config.model_save_path)
solver = Solver(vars(config))
solver.train()
result_dict = solver.test(ucr_index=config.index)
# if config.mode == 'train':
# solver.train()
# elif config.mode == 'test':
# solver.test()
return result_dict
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# Alternative
parser.add_argument('--win_size', type=int, default=100)
parser.add_argument('--patch_size', type=list, default=[5])
parser.add_argument('--lr', type=float, default=1e-4)
parser.add_argument('--loss_fuc', type=str, default='MSE')
parser.add_argument('--n_heads', type=int, default=1)
parser.add_argument('--e_layers', type=int, default=3)
parser.add_argument('--d_model', type=int, default=256)
parser.add_argument('--rec_timeseries', action='store_true', default=True)
parser.add_argument('--use_gpu', type=bool, default=True, help='use gpu')
parser.add_argument('--gpu', type=int, default=1, help='gpu')
parser.add_argument('--use_multi_gpu', action='store_true', help='use multiple gpus', default=False)
parser.add_argument('--devices', type=str, default='0,1,2,3', help='device ids of multiple gpus')
# Default
parser.add_argument('--index', type=int, default=71)
parser.add_argument('--num_epochs', type=int, default=3)
parser.add_argument('--batch_size', type=int, default=128)
parser.add_argument('--input_c', type=int, default=1)
parser.add_argument('--output_c', type=int, default=1)
parser.add_argument('--k', type=int, default=3)
parser.add_argument('--dataset', type=str, default='UCR') ## NIPS_TS_Swan SMD ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--mode', type=str, default='train', choices=['train', 'test'])
parser.add_argument('--data_path', type=str, default='datasets/')
parser.add_argument('--model_save_path', type=str, default='checkpoints')
parser.add_argument('--anormly_ratio', type=float, default=1.00)
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='dcdetector_ucr_0728.csv')
config = parser.parse_args()
args = vars(config)
config.patch_size = [int(patch_index) for patch_index in config.patch_size]
if config.dataset == 'UCR':
batch_size_buffer = [2, 4, 8, 16, 32, 64, 128, 256]
data_len = np.load(config.data_path + config.dataset + "/UCR_" + str(config.index) + "_train.npy").shape[0] ## './datasets/' +
config.batch_size = find_nearest(batch_size_buffer, data_len / config.win_size)
elif config.dataset == 'UCR_AUG':
batch_size_buffer = [2, 4, 8, 16, 32, 64, 128, 256]
data_len = np.load('./datasets/' + config.data_path + "/UCR_AUG_" + str(config.index) + "_train.npy").shape[0]
config.batch_size = find_nearest(batch_size_buffer, data_len / config.win_size)
elif config.dataset == 'SMD_Ori':
batch_size_buffer = [2, 4, 8, 16, 32, 64, 128, 256, 512]
data_len = np.load('./datasets/' + config.data_path + "/SMD_Ori_" + str(config.index) + "_train.npy").shape[0]
config.batch_size = find_nearest(batch_size_buffer, data_len / config.win_size)
config.use_gpu = torch.cuda.is_available() and config.use_gpu
if config.use_gpu and config.use_multi_gpu:
config.devices = config.devices.replace(' ', '')
device_ids = config.devices.split(',')
config.device_ids = [int(id_) for id_ in device_ids]
config.gpu = config.device_ids[0]
sys.stdout = Logger("./result_log/" + config.dataset + ".log", sys.stdout)
if config.mode == 'train':
print("\n\n")
print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
print('================ Hyperparameters ===============')
for k, v in sorted(args.items()):
print('%s: %s' % (str(k), str(v)))
print('==================== Train ===================')
train_loader, train_set = get_loader_segment(config.index, config.data_path + config.dataset, batch_size=config.batch_size,
win_size=config.win_size, mode='train', dataset=config.dataset)
train_set = train_set.train
print("train_set.shape = ", train_set.shape)
config.input_c = train_set.shape[-1]
config.output_c = train_set.shape[-1]
eval_res = main(config)
print("result_dict = ", eval_res)
eval_res['dataset'] = config.dataset + str(config.index)
import pandas as pd
# Save the evaluation results as one row in a CSV file
save_path = config.save_dir + config.save_csv_name
# Convert the result dict to a single-row DataFrame
df_new = pd.DataFrame([eval_res])
# Append to the existing results file if it already exists
if os.path.exists(save_path):
df_existing = pd.read_csv(save_path, index_col=0)
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
df_combined = df_new
# Write the combined DataFrame back to CSV
df_combined.to_csv(save_path, index=True, index_label="id")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_dcdetector_nui.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import os
import argparse
import numpy as np
import torch
from torch.autograd import Variable
from torch.backends import cudnn
from other_anomaly_baselines.dcdetector_solver import Solver
import time
import warnings
import sys
from other_anomaly_baselines.datasets.data_loader import get_loader_segment
import datautils
from torch.utils.data import TensorDataset, DataLoader
import torch.multiprocessing as mp
mp.set_sharing_strategy('file_system')
warnings.filterwarnings('ignore')
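# Same minimal window dataset as in train_at_uni.py: cuts a univariate series
# into overlapping windows of length `win_size` with stride `step`.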
class UniLoader(object):
def __init__(self, data_set, win_size, step, mode="train"):
self.mode = mode
self.step = step
self.win_size = win_size
self.train = data_set
def __len__(self):
"""
Number of sliding windows in the dataset.
"""
return (self.train.shape[0] - self.win_size) // self.step + 1
def __getitem__(self, index):
index = index * self.step
return np.float32(self.train[index:index + self.win_size])
def to_var(x, volatile=False):
if torch.cuda.is_available():
x = x.cuda()
return Variable(x, volatile=volatile)
def mkdir(directory):
if not os.path.exists(directory):
os.makedirs(directory)
class Logger(object):
def __init__(self, filename='default.log', add_flag=True, stream=sys.stdout):
self.terminal = stream
self.filename = filename
self.add_flag = add_flag
def write(self, message):
if self.add_flag:
with open(self.filename, 'a+') as log:
self.terminal.write(message)
log.write(message)
else:
with open(self.filename, 'w') as log:
self.terminal.write(message)
log.write(message)
def flush(self):
pass
def str2bool(v):
return v.lower() in ('true',)
def find_nearest(array, value):
array = np.asarray(array)
idx = (np.abs(array - value)).argmin()
return int(array[idx - 1])
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# Alternative
parser.add_argument('--win_size', type=int, default=100)
parser.add_argument('--patch_size', type=list, default=[5])
parser.add_argument('--lr', type=float, default=1e-4)
parser.add_argument('--loss_fuc', type=str, default='MSE')
parser.add_argument('--n_heads', type=int, default=1)
parser.add_argument('--e_layers', type=int, default=3)
parser.add_argument('--d_model', type=int, default=256)
parser.add_argument('--rec_timeseries', action='store_true', default=True)
parser.add_argument('--use_gpu', type=bool, default=True, help='use gpu')
parser.add_argument('--gpu', type=int, default=1, help='gpu')
parser.add_argument('--use_multi_gpu', action='store_true', help='use multiple gpus', default=False)
parser.add_argument('--devices', type=str, default='0,1,2,3', help='device ids of multiple gpus')
# Default
parser.add_argument('--index', type=int, default=137)
parser.add_argument('--num_epochs', type=int, default=3)
parser.add_argument('--batch_size', type=int, default=8)
parser.add_argument('--input_c', type=int, default=1)
parser.add_argument('--output_c', type=int, default=1)
parser.add_argument('--k', type=int, default=3)
# parser.add_argument('--dataset', type=str, default='NIPS_TS_Swan') ## NIPS_TS_Swan SMD
parser.add_argument('--dataset', type=str, default='yahoo') ## kpi, yahoo
parser.add_argument('--mode', type=str, default='train', choices=['train', 'test'])
parser.add_argument('--data_path', type=str, default='datasets/')
parser.add_argument('--model_save_path', type=str, default='checkpoints')
parser.add_argument('--anormly_ratio', type=float, default=1.00)
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='dcdetector_uni_0722.csv')
config = parser.parse_args()
args = vars(config)
config.patch_size = [int(patch_index) for patch_index in config.patch_size]
# Fall back to an alternative path if the default save directory does not exist
if not os.path.exists(config.save_dir):
config.save_dir = '/SSD/lz/tsm_ptms_anomaly_detection/result/'
print("save_dir = ", config.save_dir)  # sanity check
config.use_gpu = torch.cuda.is_available() and config.use_gpu
if config.use_gpu and config.use_multi_gpu:
config.devices = config.devices.replace(' ', '')
device_ids = config.devices.split(',')
config.device_ids = [int(id_) for id_ in device_ids]
config.gpu = config.device_ids[0]
sys.stdout = Logger("./result_log/" + config.dataset + ".log", sys.stdout)
if config.mode == 'train':
print("\n\n")
print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
print('================ Hyperparameters ===============')
for k, v in sorted(args.items()):
print('%s: %s' % (str(k), str(v)))
print('==================== Train ===================')
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(
config.dataset)
train_data = datautils.gen_ano_train_data(all_train_data)
print("train_data.shape = ", train_data.shape)
_train_data = np.array(train_data[0])
print("_train_data.shape = ", _train_data.shape, type(_train_data))
train_dataset = UniLoader(_train_data, config.win_size, 1)
train_loader = DataLoader(dataset=train_dataset,
batch_size=config.batch_size,
shuffle=True,
num_workers=2,
drop_last=True)
val_loader = train_loader
config.input_c = train_data.shape[-1]
config.output_c = train_data.shape[-1]
cudnn.benchmark = True
if (not os.path.exists(config.model_save_path)):
mkdir(config.model_save_path)
solver = Solver(vars(config))
solver.train_loader = train_loader
solver.vali_loader = val_loader
solver.test_loader = val_loader
solver.thre_loader = val_loader
solver.train_uni()
eval_res = solver.test_uni(all_train_data, all_test_data, all_test_labels, all_test_timestamps, delay, config)
print("result_dict = ", eval_res)
eval_res['dataset'] = config.dataset + str(config.index)
import pandas as pd
# Save the evaluation results as one row in a CSV file
save_path = config.save_dir + config.save_csv_name
# Convert the result dict to a single-row DataFrame
df_new = pd.DataFrame([eval_res])
# Append to the existing results file if it already exists
if os.path.exists(save_path):
df_existing = pd.read_csv(save_path, index_col=0)
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
df_combined = df_new
# Write the combined DataFrame back to CSV
df_combined.to_csv(save_path, index=True, index_label="id")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_donut.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import torch
import numpy as np
import argparse
import os
import sys
import time
import datetime
from donut import DONUT
import tasks
import datautils
from utils import init_dl_program, name_with_datetime, pkl_save, data_dropout
def save_checkpoint_callback(
save_every=1,
unit='epoch'
):
assert unit in ('epoch', 'iter')
def callback(model, loss):
n = model.n_epochs if unit == 'epoch' else model.n_iters
if n % save_every == 0:
model.save(f'{run_dir}/model_{n}.pkl')
return callback
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', default='kpi',
help='The dataset name, yahoo, kpi') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--is_multi', default=False, help='Whether to use the multivariate data loader')
parser.add_argument('--datapath', default='./datasets/', help='')
parser.add_argument('--index', type=int, default=143, help='')
parser.add_argument('--run_name', default='donut', help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
# parser.add_argument('--loader', type=str, required=True, help='The data loader used to load the experimental data--anomaly')
parser.add_argument('--loader', type=str, default='anomaly',
help='The data loader used to load the experimental data--anomaly')
parser.add_argument('--gpu', type=int, default=0, help='The gpu no. used for training and inference (defaults to 0)')
parser.add_argument('--batch-size', type=int, default=8, help='The batch size (defaults to 8)')
parser.add_argument('--lr', type=float, default=0.001, help='The learning rate (defaults to 0.001)')
parser.add_argument('--latent_dim', type=int, default=100, help='The units of the hidden layer.')
parser.add_argument('--hidden_dim', type=int, default=3, help='The dims of the hidden representation (z).')
parser.add_argument('--z_kld_weight', type=float, default=1)
parser.add_argument('--x_kld_weight', type=float, default=1)
parser.add_argument('--max-train-length', type=int, default=3000, help='Sequences longer than this value are cropped into subsequences, each shorter than this value (defaults to 3000)')
parser.add_argument('--iters', type=int, default=None, help='The number of iterations')
parser.add_argument('--epochs', type=int, default=None, help='The number of epochs')
parser.add_argument('--save-every', type=int, default=None, help='Save the checkpoint every iterations/epochs')
parser.add_argument('--seed', type=int, default=None, help='The random seed')
parser.add_argument('--max-threads', type=int, default=None, help='The maximum allowed number of threads used by this process')
parser.add_argument('--eval', default=True, help='Whether to perform evaluation after training')
parser.add_argument('--irregular', type=float, default=0, help='The ratio of missing observations (defaults to 0)')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='donut_uni_0723.csv')
args = parser.parse_args()
print("Dataset:", args.dataset)
print("Arguments:", str(args))
device = init_dl_program(args.gpu, seed=args.seed, max_threads=args.max_threads)
print('Loading data... ', end='')
if args.loader == 'anomaly':
task_type = 'anomaly_detection'
if args.is_multi:
from datasets.data_loader import get_loader_segment
data_path = args.datapath + args.dataset + '/'
print("data_path = ", data_path)
_, train_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100,
mode='train',
dataset=args.dataset)
all_train_data = train_data_loader.train
all_train_labels = None
all_train_timestamps = None
all_test_data = train_data_loader.test
all_test_labels = train_data_loader.test_labels
all_test_timestamps = None
delay = 5
print("all_train_data test_data, test_labels.shape = ", all_train_data.shape, all_test_data.shape,
all_test_labels.shape)
train_data = np.expand_dims(all_train_data, axis=0)
print("train_data.shape = ", train_data.shape)
print("Read Success!!!")
else:
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
train_data = datautils.gen_ano_train_data(all_train_data)
else:
raise ValueError(f"Unknown loader {args.loader}.")
if args.irregular > 0:
raise ValueError(f"Task type {task_type} is not supported when irregular>0.")
print('done')
config = dict(
batch_size=args.batch_size,
lr=args.lr,
latent_dim=args.latent_dim,
hidden_dim=args.hidden_dim,
z_kld_weight=args.z_kld_weight,
x_kld_weight=args.x_kld_weight,
max_train_length=args.max_train_length
)
if args.save_every is not None:
unit = 'epoch' if args.epochs is not None else 'iter'
config[f'after_{unit}_callback'] = save_checkpoint_callback(args.save_every, unit)
run_dir = 'training/' + args.dataset + '__' + name_with_datetime(args.run_name)
os.makedirs(run_dir, exist_ok=True)
t = time.time()
model = DONUT(
input_dims=train_data.shape[-1],
device=device,
**config
)
loss_log = model.train(
train_data,
n_epochs=args.epochs,
n_iters=args.iters,
verbose=True
)
model.save(f'{run_dir}/model.pkl')
t = time.time() - t
print(f"\nTraining time: {datetime.timedelta(seconds=t)}")
print("Training time(seconds): ", t)
if args.eval:
if task_type == 'anomaly_detection':
out, eval_res = model.evaluate(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay, is_multi=args.is_multi)
else:
assert False
pkl_save(f'{run_dir}/out.pkl', out)
pkl_save(f'{run_dir}/eval_res.pkl', eval_res)
print('Evaluation result:', eval_res)
eval_res['dataset'] = args.dataset + str(args.index)
import pandas as pd
# Convert the metrics dict into a one-row DataFrame
df_new = pd.DataFrame([eval_res])
# Build the CSV save path
save_path = args.save_dir + args.save_csv_name
# Append to the existing results CSV if it already exists
if os.path.exists(save_path):
# Read the rows saved by previous runs
df_existing = pd.read_csv(save_path, index_col=0)
# Append the new row to the existing frame
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
# No file yet: start a new frame
df_combined = df_new
# Save the combined DataFrame as CSV
df_combined.to_csv(save_path, index=True, index_label="id")
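# A shorter equivalent of the append logic above (a sketch, defined but not
# used by this script): pandas can append rows in place, writing the header
# only when the file does not exist yet. Note this drops the running "id"
# index that the read-concat-write block maintains.
def _append_eval_row(row_df, path):
    # Append one result row; emit the header only if the CSV is new.
    row_df.to_csv(path, mode='a', header=not os.path.exists(path), index=False)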
print("Finished.")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_donut_multi.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import torch
import numpy as np
import argparse
import time
import datetime
from donut import DONUT
import tasks
import datautils
from utils import init_dl_program, name_with_datetime, pkl_save, data_dropout
def save_checkpoint_callback(
save_every=1,
unit='epoch'
):
assert unit in ('epoch', 'iter')
def callback(model, loss):
n = model.n_epochs if unit == 'epoch' else model.n_iters
if n % save_every == 0:
model.save(f'{run_dir}/model_{n}.pkl')
return callback
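# Illustrative usage of the callback factory above (a sketch added for
# clarity; none of these scripts call it). The returned closure reads the
# module-level `run_dir`, so a checkpoint is written whenever the model's
# epoch/iteration counter is divisible by `save_every`. `_FakeModel` is a
# hypothetical stand-in, not part of the baselines.
def _demo_checkpoint_callback():
    global run_dir
    run_dir = '/tmp/donut_demo'  # the real script assigns run_dir before training

    class _FakeModel:
        n_epochs = 4
        n_iters = 0

        def save(self, path):
            print('checkpoint would be written to', path)

    cb = save_checkpoint_callback(save_every=2, unit='epoch')
    cb(_FakeModel(), loss=0.0)  # 4 % 2 == 0, so this triggers a save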
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', default='PSM',
help='The dataset name, yahoo, kpi') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--is_multi', default=True, help='Whether the dataset is multivariate')
parser.add_argument('--datapath', default='./datasets/', help='Root directory of the datasets')
parser.add_argument('--index', type=int, default=203, help='Sub-dataset index passed to the data loader')
parser.add_argument('--run_name', default='donut',
help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
# parser.add_argument('--loader', type=str, required=True, help='The data loader used to load the experimental data--anomaly')
parser.add_argument('--loader', type=str, default='anomaly',
help='The data loader used to load the experimental data--anomaly')
parser.add_argument('--gpu', type=int, default=0,
help='The gpu no. used for training and inference (defaults to 0)')
parser.add_argument('--batch-size', type=int, default=8, help='The batch size (defaults to 8)')
parser.add_argument('--lr', type=float, default=0.001, help='The learning rate (defaults to 0.001)')
parser.add_argument('--latent_dim', type=int, default=100, help='The units of the hidden layer.')
parser.add_argument('--hidden_dim', type=int, default=3, help='The dims of the hidden representation (z).')
parser.add_argument('--z_kld_weight', type=float, default=1)
parser.add_argument('--x_kld_weight', type=float, default=1)
parser.add_argument('--max-train-length', type=int, default=3000,
help='Sequences longer than this value are cropped into subsequences, each shorter than this value (defaults to 3000)')
parser.add_argument('--iters', type=int, default=None, help='The number of iterations')
parser.add_argument('--epochs', type=int, default=None, help='The number of epochs')
parser.add_argument('--save-every', type=int, default=None,
help='Save the checkpoint every iterations/epochs')
parser.add_argument('--seed', type=int, default=None, help='The random seed')
parser.add_argument('--max-threads', type=int, default=None,
help='The maximum allowed number of threads used by this process')
parser.add_argument('--eval', default=True, help='Whether to perform evaluation after training')
parser.add_argument('--irregular', type=float, default=0, help='The ratio of missing observations (defaults to 0)')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='donut_ucr_0727.csv')
args = parser.parse_args()
print("Dataset:", args.dataset)
print("Arguments:", str(args))
# If the default save directory does not exist, fall back to the alternate path
if not os.path.exists(args.save_dir):
args.save_dir = '/SSD/lz/tsm_ptms_anomaly_detection/result/'
print("save_dir = ", args.save_dir)  # sanity-check the resolved path
device = init_dl_program(args.gpu, seed=args.seed, max_threads=args.max_threads)
print('Loading data... ', end='')
if args.loader == 'anomaly':
task_type = 'anomaly_detection'
if args.is_multi:
from datasets.data_loader import get_loader_segment
data_path = args.datapath + args.dataset + '/'
print("data_path = ", data_path)
_, train_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100,
mode='train',
dataset=args.dataset)
all_train_data = train_data_loader.train
all_train_labels = None
all_train_timestamps = None
all_test_data = train_data_loader.test
all_test_labels = train_data_loader.test_labels
all_test_timestamps = None
delay = 5
print("all_train_data test_data, test_labels.shape = ", all_train_data.shape, all_test_data.shape,
all_test_labels.shape)
train_data = np.expand_dims(all_train_data, axis=0)
print("train_data.shape = ", train_data.shape)
print("Read Success!!!")
else:
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(
args.dataset)
train_data = datautils.gen_ano_train_data(all_train_data)
else:
raise ValueError(f"Unknown loader {args.loader}.")
if args.irregular > 0:
raise ValueError(f"Task type {task_type} is not supported when irregular>0.")
print('done')
config = dict(
batch_size=args.batch_size,
lr=args.lr,
latent_dim=args.latent_dim,
hidden_dim=args.hidden_dim,
z_kld_weight=args.z_kld_weight,
x_kld_weight=args.x_kld_weight,
max_train_length=args.max_train_length
)
if args.save_every is not None:
unit = 'epoch' if args.epochs is not None else 'iter'
config[f'after_{unit}_callback'] = save_checkpoint_callback(args.save_every, unit)
run_dir = 'training/' + args.dataset + '__' + name_with_datetime(args.run_name)
os.makedirs(run_dir, exist_ok=True)
t = time.time()
model = DONUT(
input_dims=train_data.shape[-1],
device=device,
**config
)
loss_log = model.train(
train_data,
n_epochs=args.epochs,
n_iters=args.iters,
verbose=True
)
model.save(f'{run_dir}/model.pkl')
t = time.time() - t
print(f"\nTraining time: {datetime.timedelta(seconds=t)}")
print("Training time(seconds): ", t)
if args.eval:
if task_type == 'anomaly_detection':
out, eval_res = model.evaluate(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data,
all_test_labels, all_test_timestamps, delay, is_multi=args.is_multi, ucr_index=args.index)
else:
assert False
pkl_save(f'{run_dir}/out.pkl', out)
pkl_save(f'{run_dir}/eval_res.pkl', eval_res)
print('Evaluation result:', eval_res)
eval_res['dataset'] = args.dataset + str(args.index)
import pandas as pd
# Convert the metrics dict into a one-row DataFrame
df_new = pd.DataFrame([eval_res])
# Build the CSV save path
save_path = args.save_dir + args.save_csv_name
# Append to the existing results CSV if it already exists
if os.path.exists(save_path):
# Read the rows saved by previous runs
df_existing = pd.read_csv(save_path, index_col=0)
# Append the new row to the existing frame
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
# No file yet: start a new frame
df_combined = df_new
# Save the combined DataFrame as CSV
df_combined.to_csv(save_path, index=True, index_label="id")
print("Finished.")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_dspot.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import numpy as np
import matplotlib.pyplot as plt
from spot import dSPOT
import time
import datetime
import datautils
from sklearn.metrics import f1_score, precision_score, recall_score
import argparse
from other_anomaly_baselines.metrics.affiliation.metrics import pr_from_events
from other_anomaly_baselines.metrics.vus.metrics import get_range_vus_roc
from other_anomaly_baselines.metrics.affiliation.generics import convert_vector_to_events
from tadpak import evaluate
def adjustment(gt, pred):
anomaly_state = False
for i in range(len(gt)):
if gt[i] == 1 and pred[i] == 1 and not anomaly_state:
anomaly_state = True
for j in range(i, 0, -1):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
for j in range(i, len(gt)):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
elif gt[i] == 0:
anomaly_state = False
if anomaly_state:
pred[i] = 1
return gt, pred
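# A minimal, self-contained check of the point-adjustment ("PA") protocol
# implemented above (added for illustration; never called by this script):
# once any point inside a ground-truth anomaly segment is predicted, the
# whole segment is credited as detected.
def _demo_adjustment():
    gt = [0, 1, 1, 1, 0, 1, 0]
    pred = [0, 0, 1, 0, 0, 0, 0]  # a single hit inside the first segment
    _, adjusted = adjustment(list(gt), list(pred))
    # The first segment is fully credited; the untouched second segment is not.
    assert adjusted == [0, 1, 1, 1, 0, 0, 0]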
# consider delay threshold and missing segments
def get_range_proba(predict, label, delay=7):
splits = np.where(label[1:] != label[:-1])[0] + 1
is_anomaly = label[0] == 1
new_predict = np.array(predict)
pos = 0
for sp in splits:
if is_anomaly:
if 1 in predict[pos:min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
is_anomaly = not is_anomaly
pos = sp
sp = len(label)
if is_anomaly: # anomaly in the end
if 1 in predict[pos: min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
return new_predict
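# A small check of the delay-aware evaluation above (illustrative only; not
# invoked here): a segment only counts as detected if an alarm fires within
# `delay` steps of its start, in which case the whole segment is credited.
def _demo_get_range_proba():
    label = np.array([0, 1, 1, 1, 1, 0])
    # Alarm at t=2 falls within delay=1 of the segment start at t=1.
    pred_in_time = np.array([0, 0, 1, 0, 0, 0])
    assert get_range_proba(pred_in_time, label, delay=1).tolist() == [0, 1, 1, 1, 1, 0]
    # Alarm at t=4 is outside the delay window, so the segment is zeroed out.
    pred_late = np.array([0, 0, 0, 0, 1, 0])
    assert get_range_proba(pred_late, label, delay=1).tolist() == [0, 0, 0, 0, 0, 0]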
parser = argparse.ArgumentParser()
# parser.add_argument('dataset', help='The dataset name')
# parser.add_argument('run_name', help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
# parser.add_argument('--dataset', default='kpi', help='The dataset name, yahoo, kpi')
parser.add_argument('--dataset', default='kpi',
help='The dataset name, yahoo, kpi') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--is_multi', default=False, help='Whether the dataset is multivariate')
parser.add_argument('--datapath', default='./datasets/', help='Root directory of the datasets')
parser.add_argument('--gpu', type=int, default=0, help='The gpu no. used for training and inference (defaults to 0)')
parser.add_argument('--index', type=int, default=143, help='Sub-dataset index passed to the data loader')
parser.add_argument('--batch_size', type=int, default=8, help='The batch size (defaults to 8)')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='dspot_0719.csv')
args = parser.parse_args()
print("Dataset:", args.dataset)
print("Arguments:", str(args))
if args.is_multi:
from datasets.data_loader import get_loader_segment
data_path = args.datapath + args.dataset + '/'
print("data_path = ", data_path)
_, train_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100,
mode='train',
dataset=args.dataset)
all_train_data = train_data_loader.train
all_train_labels = None
all_train_timestamps = None
all_test_data = train_data_loader.test
all_test_labels = train_data_loader.test_labels
all_test_timestamps = None
delay = 5
all_train_data = np.squeeze(all_train_data)
all_test_data = np.squeeze(all_test_data)
print("all_train_data test_data, test_labels.shape = ", all_train_data.shape, all_test_data.shape,
all_test_labels.shape)
# train_data = np.expand_dims(all_train_data, axis=0)
# print("train_data.shape = ", train_data.shape)
print("Read Success!!!")
else:
# dataset = 'kpi' # yahoo, kpi
print('Loading data... ', end='')
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
labels = []
pred = []
scores = []
if args.is_multi:
train_data = all_train_data # initial batch
train_labels = all_train_labels
test_data = all_test_data # stream
test_labels = all_test_labels
test_timestamps = all_test_timestamps
q = 1e-4 # risk parameter # yahoo: 1e-3
d = 50 # depth
s = dSPOT(q, d) # DSPOT object
s.fit(train_data, test_data) # data import
s.initialize() # initialization step
results = s.run() # run
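# `results` is the dict returned by dSPOT.run(); as consumed below it holds
# per-step outputs: 'thresholds' (the drift-adjusted decision threshold at
# each test step), 'alarms' (indices where the score crossed the threshold),
# and 'scores' (the anomaly scores recorded during the run).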
test_thresholds = results['thresholds']
idx_anomaly = results['alarms']
test_pred = np.zeros(len(test_thresholds))
test_pred[idx_anomaly] = 1
test_pred = get_range_proba(test_pred, test_labels, delay)
labels.append(test_labels)
pred.append(test_pred)
scores.append(results['scores'])
else:
for k in all_test_data:
train_data = all_train_data[k] # initial batch
train_labels = all_train_labels[k]
train_timestamps = all_train_timestamps[k]
test_data = all_test_data[k] # stream
test_labels = all_test_labels[k]
test_timestamps = all_test_timestamps[k]
q = 1e-4 # risk parameter # yahoo: 1e-3
d = 50 # depth
s = dSPOT(q, d) # DSPOT object
s.fit(train_data, test_data) # data import
s.initialize() # initialization step
results = s.run() # run
test_thresholds = results['thresholds']
idx_anomaly = results['alarms']
test_pred = np.zeros(len(test_thresholds))
test_pred[idx_anomaly] = 1
test_pred = get_range_proba(test_pred, test_labels, delay)
labels.append(test_labels)
pred.append(test_pred)
scores.append(results['scores'])
labels = np.concatenate(labels)
pred = np.concatenate(pred)
scores = np.concatenate(scores)
if args.is_multi:
# labels = np.asarray(labels_log, np.int64)[0]
# pred = np.asarray(res_log, np.int64)[0]
# print("labels.shape = ", labels.shape, labels[:5])
# print("pred.shape = ", pred.shape, pred[:5])
labels, pred = adjustment(labels, pred)
events_pred = convert_vector_to_events(pred)
events_gt = convert_vector_to_events(labels)
Trange = (0, len(labels))
affiliation = pr_from_events(events_pred, events_gt, Trange)
vus_results = get_range_vus_roc(labels, pred, 100) # default slidingWindow = 100
pred_scores = scores
results_f1_pa_k_10 = evaluate.evaluate(pred_scores, labels, k=10)
# results_f1_pa_k_30 = evaluate.evaluate(pred, labels, k=30)
results_f1_pa_k_50 = evaluate.evaluate(pred_scores, labels, k=50)
results_f1_pa_k_90 = evaluate.evaluate(pred_scores, labels, k=90)
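# The dict below gathers four metric families, all computed on the
# point-adjusted predictions except PA%K: point-wise precision/recall/F1,
# event-based affiliation precision/recall, range-based R_AUC/VUS scores,
# and the best F1 under the PA%K protocol (K = 10/50/90) from the raw scores.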
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"],
'f1_pa_10': results_f1_pa_k_10['best_f1_w_pa'],
'f1_pa_50': results_f1_pa_k_50['best_f1_w_pa'],
'f1_pa_90': results_f1_pa_k_90['best_f1_w_pa'],
}
print("eval_res = ", eval_res)
else:
print('\nf1:', f1_score(labels, pred))
print('precision:', precision_score(labels, pred))
print('recall:', recall_score(labels, pred))
events_pred = convert_vector_to_events(pred)
events_gt = convert_vector_to_events(labels)
Trange = (0, len(labels))
affiliation = pr_from_events(events_pred, events_gt, Trange)
vus_results = get_range_vus_roc(labels, pred, 100) # default slidingWindow = 100
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"]
}
# results_f1_pa_k_10 = evaluate.evaluate(scores, labels, k=10)
# # results_f1_pa_k_30 = evaluate.evaluate(pred, labels, k=30)
# results_f1_pa_k_50 = evaluate.evaluate(scores, labels, k=50)
# results_f1_pa_k_90 = evaluate.evaluate(scores, labels, k=90)
#
# eval_res['f1_pa_10'] = results_f1_pa_k_10['best_f1_w_pa']
# eval_res['f1_pa_50'] = results_f1_pa_k_50['best_f1_w_pa']
# eval_res['f1_pa_90'] = results_f1_pa_k_90['best_f1_w_pa']
eval_res['dataset'] = args.dataset + str(args.index)
import pandas as pd
# Convert the metrics dict into a one-row DataFrame
df_new = pd.DataFrame([eval_res])
# Build the CSV save path
save_path = args.save_dir + args.save_csv_name
# Append to the existing results CSV if it already exists
if os.path.exists(save_path):
# Read the rows saved by previous runs
df_existing = pd.read_csv(save_path, index_col=0)
# Append the new row to the existing frame
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
# No file yet: start a new frame
df_combined = df_new
# Save the combined DataFrame as CSV
df_combined.to_csv(save_path, index=True, index_label="id")
print("Finished.")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_dspot_multi.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import numpy as np
import matplotlib.pyplot as plt
from spot import dSPOT
import time
import datetime
import datautils
from sklearn.metrics import f1_score, precision_score, recall_score
import argparse
from other_anomaly_baselines.metrics.affiliation.metrics import pr_from_events
from other_anomaly_baselines.metrics.vus.metrics import get_range_vus_roc
from other_anomaly_baselines.metrics.affiliation.generics import convert_vector_to_events
from tadpak import evaluate
import torch.multiprocessing as mp
# Change the multiprocessing tensor-sharing strategy to the file system
mp.set_sharing_strategy('file_system')
def adjustment(gt, pred):
anomaly_state = False
for i in range(len(gt)):
if gt[i] == 1 and pred[i] == 1 and not anomaly_state:
anomaly_state = True
for j in range(i, 0, -1):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
for j in range(i, len(gt)):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
elif gt[i] == 0:
anomaly_state = False
if anomaly_state:
pred[i] = 1
return gt, pred
# consider delay threshold and missing segments
def get_range_proba(predict, label, delay=7):
splits = np.where(label[1:] != label[:-1])[0] + 1
is_anomaly = label[0] == 1
new_predict = np.array(predict)
pos = 0
for sp in splits:
if is_anomaly:
if 1 in predict[pos:min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
is_anomaly = not is_anomaly
pos = sp
sp = len(label)
if is_anomaly: # anomaly in the end
if 1 in predict[pos: min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
return new_predict
parser = argparse.ArgumentParser()
# parser.add_argument('dataset', help='The dataset name')
# parser.add_argument('run_name', help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
# parser.add_argument('--dataset', default='kpi', help='The dataset name, yahoo, kpi')
parser.add_argument('--dataset', default='UCR',
help='The dataset name, yahoo, kpi') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--is_multi', default=True, help='Whether the dataset is multivariate')
parser.add_argument('--datapath', default='./datasets/', help='Root directory of the datasets')
parser.add_argument('--gpu', type=int, default=0, help='The gpu no. used for training and inference (defaults to 0)')
parser.add_argument('--index', type=int, default=203, help='Sub-dataset index passed to the data loader') ## [79, 108, 187, 203]
parser.add_argument('--batch_size', type=int, default=8, help='The batch size (defaults to 8)')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='dspot_ucr_0727.csv')
args = parser.parse_args()
print("Dataset:", args.dataset)
print("Arguments:", str(args))
if args.is_multi:
from datasets.data_loader import get_loader_segment
data_path = args.datapath + args.dataset + '/'
print("data_path = ", data_path)
_, train_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100,
mode='train',
dataset=args.dataset)
all_train_data = train_data_loader.train
all_train_labels = None
all_train_timestamps = None
all_test_data = train_data_loader.test
all_test_labels = train_data_loader.test_labels
all_test_timestamps = None
delay = 5
all_train_data = np.squeeze(all_train_data)
all_test_data = np.squeeze(all_test_data)
print("all_train_data test_data, test_labels.shape = ", all_train_data.shape, all_test_data.shape,
all_test_labels.shape)
# train_data = np.expand_dims(all_train_data, axis=0)
# print("train_data.shape = ", train_data.shape)
print("Read Success!!!")
else:
# dataset = 'kpi' # yahoo, kpi
print('Loading data... ', end='')
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
labels = []
pred = []
scores = []
if args.is_multi:
train_data = all_train_data # initial batch
train_labels = all_train_labels
test_data = all_test_data # stream
test_labels = all_test_labels
test_timestamps = all_test_timestamps
q = 1e-4 # risk parameter # yahoo: 1e-3
d = 50 # depth
s = dSPOT(q, d) # DSPOT object
s.fit(train_data, test_data) # data import
s.initialize() # initialization step
results = s.run() # run
test_thresholds = results['thresholds']
idx_anomaly = results['alarms']
test_pred = np.zeros(len(test_thresholds))
test_pred[idx_anomaly] = 1
test_pred = get_range_proba(test_pred, test_labels, delay)
labels.append(test_labels)
pred.append(test_pred)
scores.append(results['scores'])
else:
for k in all_test_data:
train_data = all_train_data[k] # initial batch
train_labels = all_train_labels[k]
train_timestamps = all_train_timestamps[k]
test_data = all_test_data[k] # stream
test_labels = all_test_labels[k]
test_timestamps = all_test_timestamps[k]
q = 1e-4 # risk parameter # yahoo: 1e-3
d = 50 # depth
s = dSPOT(q, d) # DSPOT object
s.fit(train_data, test_data) # data import
s.initialize() # initialization step
results = s.run() # run
test_thresholds = results['thresholds']
idx_anomaly = results['alarms']
test_pred = np.zeros(len(test_thresholds))
test_pred[idx_anomaly] = 1
test_pred = get_range_proba(test_pred, test_labels, delay)
labels.append(test_labels)
pred.append(test_pred)
scores.append(results['scores'])
labels = np.concatenate(labels)
pred = np.concatenate(pred)
scores = np.concatenate(scores)
if args.is_multi:
# labels = np.asarray(labels_log, np.int64)[0]
# pred = np.asarray(res_log, np.int64)[0]
# print("labels.shape = ", labels.shape, labels[:5])
# print("pred.shape = ", pred.shape, pred[:5])
if args.index in (79, 108, 187, 203):
labels, pred = adjustment(labels, pred)
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": None,
"Affiliation recall": None,
"R_AUC_ROC": None,
"R_AUC_PR": None,
"VUS_ROC": None,
"VUS_PR": None,
'f1_pa_10': None,
'f1_pa_50': None,
'f1_pa_90': None,
}
else:
events_pred = convert_vector_to_events(pred)
events_gt = convert_vector_to_events(labels)
Trange = (0, len(labels))
affiliation = pr_from_events(events_pred, events_gt, Trange)
vus_results = get_range_vus_roc(labels, pred, 100) # default slidingWindow = 100
pred_scores = scores
results_f1_pa_k_10 = evaluate.evaluate(pred_scores, labels, k=10)
# results_f1_pa_k_30 = evaluate.evaluate(pred, labels, k=30)
results_f1_pa_k_50 = evaluate.evaluate(pred_scores, labels, k=50)
results_f1_pa_k_90 = evaluate.evaluate(pred_scores, labels, k=90)
labels, pred = adjustment(labels, pred)
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"],
'f1_pa_10': results_f1_pa_k_10['best_f1_w_pa'],
'f1_pa_50': results_f1_pa_k_50['best_f1_w_pa'],
'f1_pa_90': results_f1_pa_k_90['best_f1_w_pa'],
}
print("eval_res = ", eval_res)
else:
print('\nf1:', f1_score(labels, pred))
print('precision:', precision_score(labels, pred))
print('recall:', recall_score(labels, pred))
events_pred = convert_vector_to_events(pred)
events_gt = convert_vector_to_events(labels)
Trange = (0, len(labels))
affiliation = pr_from_events(events_pred, events_gt, Trange)
vus_results = get_range_vus_roc(labels, pred, 100) # default slidingWindow = 100
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"]
}
results_f1_pa_k_10 = evaluate.evaluate(scores, labels, k=10)
# results_f1_pa_k_30 = evaluate.evaluate(pred, labels, k=30)
results_f1_pa_k_50 = evaluate.evaluate(scores, labels, k=50)
results_f1_pa_k_90 = evaluate.evaluate(scores, labels, k=90)
eval_res['f1_pa_10'] = results_f1_pa_k_10['best_f1_w_pa']
eval_res['f1_pa_50'] = results_f1_pa_k_50['best_f1_w_pa']
eval_res['f1_pa_90'] = results_f1_pa_k_90['best_f1_w_pa']
eval_res['dataset'] = args.dataset + str(args.index)
import pandas as pd
# Convert the metrics dict into a one-row DataFrame
df_new = pd.DataFrame([eval_res])
# Build the CSV save path
save_path = args.save_dir + args.save_csv_name
# Append to the existing results CSV if it already exists
if os.path.exists(save_path):
# Read the rows saved by previous runs
df_existing = pd.read_csv(save_path, index_col=0)
# Append the new row to the existing frame
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
# No file yet: start a new frame
df_combined = df_new
# Save the combined DataFrame as CSV
df_combined.to_csv(save_path, index=True, index_label="id")
print("Finished.")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_gpt4ts.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import argparse
import torch
from other_anomaly_baselines.exp_anomaly_detection import Exp_Anomaly_Detection
from other_anomaly_baselines.datasets.data_loader import get_loader_segment
import random
import numpy as np
if __name__ == '__main__':
fix_seed = 42
random.seed(fix_seed)
torch.manual_seed(fix_seed)
np.random.seed(fix_seed)
parser = argparse.ArgumentParser(description='GPT4TS')
# basic config
parser.add_argument('--task_name', type=str, default='anomaly_detection',
help='task name, options:[long_term_forecast, short_term_forecast, imputation, classification, anomaly_detection]')
parser.add_argument('--is_training', type=int, default=1, help='status')
parser.add_argument('--model_id', type=str, default='test', help='model id')
parser.add_argument('--model', type=str, default='GPT4TS',
help='model name, options: [Autoformer, Transformer, TimesNet]')
# data loader
parser.add_argument('--data', type=str, default='UCR', help='dataset type') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--root_path', type=str, default='./data/ETT/', help='root path of the data file')
# parser.add_argument('--data_path', type=str, default='ETTh1.csv', help='data file')
parser.add_argument('--features', type=str, default='M',
help='forecasting task, options:[M, S, MS]; M:multivariate predict multivariate, S:univariate predict univariate, MS:multivariate predict univariate')
parser.add_argument('--target', type=str, default='OT', help='target feature in S or MS task')
parser.add_argument('--freq', type=str, default='h',
help='freq for time features encoding, options:[s:secondly, t:minutely, h:hourly, d:daily, b:business days, w:weekly, m:monthly], you can also use more detailed freq like 15min or 3h')
parser.add_argument('--checkpoints', type=str, default='./checkpoints/', help='location of model checkpoints')
# forecasting task
parser.add_argument('--seq_len', type=int, default=100, help='input sequence length')
parser.add_argument('--label_len', type=int, default=48, help='start token length')
parser.add_argument('--pred_len', type=int, default=0, help='prediction sequence length')
parser.add_argument('--seasonal_patterns', type=str, default='Monthly', help='subset for M4')
parser.add_argument('--inverse', action='store_true', help='inverse output data', default=False)
# imputation task
parser.add_argument('--mask_rate', type=float, default=0.25, help='mask ratio')
# anomaly detection task
parser.add_argument('--anomaly_ratio', type=float, default=0.5, help='prior anomaly ratio (%)')
# model define
parser.add_argument('--top_k', type=int, default=3, help='for TimesBlock')
parser.add_argument('--num_kernels', type=int, default=6, help='for Inception')
parser.add_argument('--enc_in', type=int, default=1, help='encoder input size') ## 55 for MSL, 38 for SMD, SMAP for 25, PSM for 25, SWAT for 51, NIPS_TS_Swan for 38,
parser.add_argument('--dec_in', type=int, default=7, help='decoder input size') ## NIPS_TS_Water for 38, UCR for 1
parser.add_argument('--c_out', type=int, default=1, help='output size')
parser.add_argument('--d_model', type=int, default=8, help='dimension of model')
parser.add_argument('--n_heads', type=int, default=8, help='num of heads')
parser.add_argument('--e_layers', type=int, default=1, help='num of encoder layers')
parser.add_argument('--d_layers', type=int, default=1, help='num of decoder layers')
parser.add_argument('--d_ff', type=int, default=16, help='dimension of fcn')
parser.add_argument('--moving_avg', type=int, default=25, help='window size of moving average')
parser.add_argument('--factor', type=int, default=1, help='attn factor')
parser.add_argument('--distil', action='store_false',
help='whether to use distilling in encoder, using this argument means not using distilling',
default=True)
parser.add_argument('--dropout', type=float, default=0.1, help='dropout')
parser.add_argument('--embed', type=str, default='timeF',
help='time features encoding, options:[timeF, fixed, learned]')
parser.add_argument('--activation', type=str, default='gelu', help='activation')
parser.add_argument('--output_attention', action='store_true', help='whether to output attention in encoder')
# optimization
parser.add_argument('--num_workers', type=int, default=10, help='data loader num workers')
parser.add_argument('--itr', type=int, default=1, help='number of experiment repetitions')
parser.add_argument('--train_epochs', type=int, default=3, help='train epochs')
parser.add_argument('--batch_size', type=int, default=32, help='batch size of train input data')
parser.add_argument('--patience', type=int, default=3, help='early stopping patience')
parser.add_argument('--learning_rate', type=float, default=0.0001, help='optimizer learning rate')
parser.add_argument('--des', type=str, default='test', help='exp description')
parser.add_argument('--loss', type=str, default='MSE', help='loss function')
parser.add_argument('--lradj', type=str, default='type1', help='adjust learning rate')
parser.add_argument('--use_amp', action='store_true', help='use automatic mixed precision training', default=False)
# GPU
parser.add_argument('--use_gpu', type=bool, default=True, help='use gpu')
parser.add_argument('--gpu', type=int, default=0, help='gpu')
parser.add_argument('--use_multi_gpu', action='store_true', help='use multiple gpus', default=False)
parser.add_argument('--devices', type=str, default='0,1,2,3', help='device ids of multiple gpus')
# de-stationary projector params
parser.add_argument('--p_hidden_dims', type=int, nargs='+', default=[128, 128],
help='hidden layer dimensions of projector (List)')
parser.add_argument('--p_hidden_layers', type=int, default=2, help='number of hidden layers in projector')
# patching
parser.add_argument('--patch_size', type=int, default=1)
parser.add_argument('--stride', type=int, default=1)
parser.add_argument('--gpt_layers', type=int, default=6)
parser.add_argument('--ln', type=int, default=0)
parser.add_argument('--mlp', type=int, default=0)
parser.add_argument('--weight', type=float, default=0)
parser.add_argument('--percent', type=int, default=5)
# Default
parser.add_argument('--index', type=int, default=79)
parser.add_argument('--data_path', type=str, default='datasets/')
parser.add_argument('--win_size', type=int, default=100)
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='gpt4ts_ucr_0727.csv')
args = parser.parse_args()
args.use_gpu = True if torch.cuda.is_available() and args.use_gpu else False
# If the default save directory does not exist, fall back to the alternate path
if not os.path.exists(args.save_dir):
args.save_dir = '/SSD/lz/tsm_ptms_anomaly_detection/result/'
print("save_dir = ", args.save_dir)  # sanity-check the resolved path
if args.use_gpu and args.use_multi_gpu:
args.devices = args.devices.replace(' ', '')
device_ids = args.devices.split(',')
args.device_ids = [int(id_) for id_ in device_ids]
args.gpu = args.device_ids[0]
print('Args in experiment:')
print(args)
Exp = Exp_Anomaly_Detection
train_loader, train_set = get_loader_segment(args.index, args.data_path + args.data, batch_size=args.batch_size,
win_size=args.win_size, mode='train', dataset=args.data)
val_loader, val_set = get_loader_segment(args.index, args.data_path + args.data, batch_size=args.batch_size,
win_size=args.win_size, mode='val', dataset=args.data)
test_loader, test_set = get_loader_segment(args.index, args.data_path + args.data, batch_size=args.batch_size,
win_size=args.win_size, mode='test', dataset=args.data)
train_set = train_set.train
val_set = val_set.val
test_set = test_set.test
print("train_set.shape = ", train_set.shape, ", test_set.shape = ", test_set.shape, test_set.shape[-1])
args.enc_in = train_set.shape[-1]
args.c_out = train_set.shape[-1]
if args.is_training:
for ii in range(args.itr):
# setting record of experiments
setting = '{}_{}_{}_{}_ft{}_sl{}_ll{}_pl{}_dm{}_nh{}_el{}_dl{}_df{}_fc{}_eb{}_dt{}_{}_{}'.format(
args.task_name,
args.model_id,
args.model,
args.data,
args.features,
args.seq_len,
args.label_len,
args.pred_len,
args.d_model,
args.n_heads,
args.e_layers,
args.d_layers,
args.d_ff,
args.factor,
args.embed,
args.distil,
args.des, ii)
exp = Exp(args, train_set, train_loader, val_set, val_loader, test_set, test_loader) # set experiments
print('>>>>>>>start training : {}>>>>>>>>>>>>>>>>>>>>>>>>>>'.format(setting))
exp.train(setting)
print('>>>>>>>testing : {}<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<'.format(setting))
eval_res = exp.test(setting, dataset=args.data, ucr_index=args.index)
torch.cuda.empty_cache()
print("result_dict = ", eval_res)
eval_res['dataset'] = args.data + str(args.index)
import pandas as pd
# Convert the metrics dict into a one-row DataFrame
df_new = pd.DataFrame([eval_res])
# Build the CSV save path
save_path = args.save_dir + args.save_csv_name
# Append to the existing results CSV if it already exists
if os.path.exists(save_path):
# Read the rows saved by previous runs
df_existing = pd.read_csv(save_path, index_col=0)
# Append the new row to the existing frame
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
# No file yet: start a new frame
df_combined = df_new
# Save the combined DataFrame as CSV
df_combined.to_csv(save_path, index=True, index_label="id")
else:
ii = 0
setting = '{}_{}_{}_{}_ft{}_sl{}_ll{}_pl{}_dm{}_nh{}_el{}_dl{}_df{}_fc{}_eb{}_dt{}_{}_{}'.format(
args.task_name,
args.model_id,
args.model,
args.data,
args.features,
args.seq_len,
args.label_len,
args.pred_len,
args.d_model,
args.n_heads,
args.e_layers,
args.d_layers,
args.d_ff,
args.factor,
args.embed,
args.distil,
args.des, ii)
exp = Exp(args) # set experiments
print('>>>>>>>testing : {}<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<'.format(setting))
exp.test(setting, test=1)
torch.cuda.empty_cache()
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_gpt4ts_uni.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import argparse
import torch
from torch.utils.data import TensorDataset, DataLoader
from other_anomaly_baselines.exp_anomaly_detection import Exp_Anomaly_Detection
from other_anomaly_baselines.datasets.data_loader import get_loader_segment
import datautils
import random
import numpy as np
class UniLoader(object):
def __init__(self, data_set, win_size, step, mode="train"):
self.mode = mode
self.step = step
self.win_size = win_size
self.train = data_set
def __len__(self):
"""
Number of sliding windows produced over the series.
"""
return (self.train.shape[0] - self.win_size) // self.step + 1
def __getitem__(self, index):
index = index * self.step
return np.float32(self.train[index:index + self.win_size])
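# A quick sanity check of the sliding-window arithmetic above (illustrative
# only; not executed by this script). With 10 points, win_size=4 and step=2,
# the loader yields (10 - 4) // 2 + 1 = 4 overlapping windows.
def _demo_uniloader():
    series = np.arange(10, dtype=np.float32)
    loader = UniLoader(series, win_size=4, step=2)
    assert len(loader) == 4
    assert loader[1].tolist() == [2.0, 3.0, 4.0, 5.0]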
if __name__ == '__main__':
fix_seed = 42
random.seed(fix_seed)
torch.manual_seed(fix_seed)
np.random.seed(fix_seed)
parser = argparse.ArgumentParser(description='GPT4TS')
# basic config
parser.add_argument('--task_name', type=str, default='anomaly_detection',
help='task name, options:[long_term_forecast, short_term_forecast, imputation, classification, anomaly_detection]')
parser.add_argument('--is_training', type=int, default=1, help='status')
parser.add_argument('--model_id', type=str, default='test', help='model id')
parser.add_argument('--model', type=str, default='GPT4TS',
help='model name, options: [Autoformer, Transformer, TimesNet]')
# data loader
# parser.add_argument('--data', type=str, default='UCR', help='dataset type') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--data', type=str, default='kpi') ## kpi, yahoo
parser.add_argument('--root_path', type=str, default='./data/ETT/', help='root path of the data file')
# parser.add_argument('--data_path', type=str, default='ETTh1.csv', help='data file')
parser.add_argument('--features', type=str, default='M',
help='forecasting task, options:[M, S, MS]; M:multivariate predict multivariate, S:univariate predict univariate, MS:multivariate predict univariate')
parser.add_argument('--target', type=str, default='OT', help='target feature in S or MS task')
parser.add_argument('--freq', type=str, default='h',
help='freq for time features encoding, options:[s:secondly, t:minutely, h:hourly, d:daily, b:business days, w:weekly, m:monthly], you can also use more detailed freq like 15min or 3h')
parser.add_argument('--checkpoints', type=str, default='./checkpoints/', help='location of model checkpoints')
# forecasting task
parser.add_argument('--seq_len', type=int, default=100, help='input sequence length')
parser.add_argument('--label_len', type=int, default=48, help='start token length')
parser.add_argument('--pred_len', type=int, default=0, help='prediction sequence length')
parser.add_argument('--seasonal_patterns', type=str, default='Monthly', help='subset for M4')
parser.add_argument('--inverse', action='store_true', help='inverse output data', default=False)
# imputation task
parser.add_argument('--mask_rate', type=float, default=0.25, help='mask ratio')
# anomaly detection task
parser.add_argument('--anomaly_ratio', type=float, default=1, help='prior anomaly ratio (%)')
# model define
parser.add_argument('--top_k', type=int, default=3, help='for TimesBlock')
parser.add_argument('--num_kernels', type=int, default=6, help='for Inception')
parser.add_argument('--enc_in', type=int, default=1, help='encoder input size') ## 55 for MSL, 38 for SMD, SMAP for 25, PSM for 25, SWAT for 51, NIPS_TS_Swan for 38,
parser.add_argument('--dec_in', type=int, default=7, help='decoder input size') ## NIPS_TS_Water for 38, UCR for 1
parser.add_argument('--c_out', type=int, default=1, help='output size')
parser.add_argument('--d_model', type=int, default=8, help='dimension of model')
parser.add_argument('--n_heads', type=int, default=8, help='num of heads')
parser.add_argument('--e_layers', type=int, default=1, help='num of encoder layers')
parser.add_argument('--d_layers', type=int, default=1, help='num of decoder layers')
parser.add_argument('--d_ff', type=int, default=16, help='dimension of fcn')
parser.add_argument('--moving_avg', type=int, default=25, help='window size of moving average')
parser.add_argument('--factor', type=int, default=1, help='attn factor')
parser.add_argument('--distil', action='store_false',
help='whether to use distilling in encoder, using this argument means not using distilling',
default=True)
parser.add_argument('--dropout', type=float, default=0.1, help='dropout')
parser.add_argument('--embed', type=str, default='timeF',
help='time features encoding, options:[timeF, fixed, learned]')
parser.add_argument('--activation', type=str, default='gelu', help='activation')
parser.add_argument('--output_attention', action='store_true', help='whether to output attention in encoder')
# optimization
parser.add_argument('--num_workers', type=int, default=10, help='data loader num workers')
parser.add_argument('--itr', type=int, default=1, help='number of experiment repetitions')
parser.add_argument('--train_epochs', type=int, default=1, help='train epochs')
parser.add_argument('--batch_size', type=int, default=8, help='batch size of train input data')
parser.add_argument('--patience', type=int, default=3, help='early stopping patience')
parser.add_argument('--learning_rate', type=float, default=0.0001, help='optimizer learning rate')
parser.add_argument('--des', type=str, default='test', help='exp description')
parser.add_argument('--loss', type=str, default='MSE', help='loss function')
parser.add_argument('--lradj', type=str, default='type1', help='adjust learning rate')
parser.add_argument('--use_amp', action='store_true', help='use automatic mixed precision training', default=False)
# GPU
parser.add_argument('--use_gpu', type=bool, default=True, help='use gpu')
parser.add_argument('--gpu', type=int, default=0, help='gpu')
parser.add_argument('--use_multi_gpu', action='store_true', help='use multiple gpus', default=False)
parser.add_argument('--devices', type=str, default='0,1,2,3', help='device ids of multiple gpus')
# de-stationary projector params
parser.add_argument('--p_hidden_dims', type=int, nargs='+', default=[128, 128],
help='hidden layer dimensions of projector (List)')
parser.add_argument('--p_hidden_layers', type=int, default=2, help='number of hidden layers in projector')
# patching
parser.add_argument('--patch_size', type=int, default=1)
parser.add_argument('--stride', type=int, default=1)
parser.add_argument('--gpt_layers', type=int, default=6)
parser.add_argument('--ln', type=int, default=0)
parser.add_argument('--mlp', type=int, default=0)
parser.add_argument('--weight', type=float, default=0)
parser.add_argument('--percent', type=int, default=5)
# Default
parser.add_argument('--index', type=int, default=137)
parser.add_argument('--data_path', type=str, default='datasets/')
parser.add_argument('--win_size', type=int, default=100)
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='gpt4ts_uni_0717.csv')
args = parser.parse_args()
args.use_gpu = True if torch.cuda.is_available() and args.use_gpu else False
# If the default save directory does not exist, fall back to the alternate path
if not os.path.exists(args.save_dir):
args.save_dir = '/SSD/lz/tsm_ptms_anomaly_detection/result/'
print("save_dir = ", args.save_dir)  # sanity-check the resolved path
if args.use_gpu and args.use_multi_gpu:
args.devices = args.devices.replace(' ', '')
device_ids = args.devices.split(',')
args.device_ids = [int(id_) for id_ in device_ids]
args.gpu = args.device_ids[0]
print('Args in experiment:')
print(args)
Exp = Exp_Anomaly_Detection
# dataset = 'MSL'
# _train_loader, _train_set = get_loader_segment(args.index, args.data_path + dataset,
# batch_size=args.batch_size,
# win_size=args.win_size, mode='train', dataset=dataset)
#
# _train_set = _train_set.train
#
# print("_train_set.shape = ", _train_set.shape)
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(
args.data)
train_data = datautils.gen_ano_train_data(all_train_data)
print("train_data.shape = ", train_data.shape)
_train_data = train_data[0]
print("000train_data.shape = ", train_data.shape, type(train_data))
_train_data = np.array(_train_data)
print("111_train_data.shape = ", _train_data.shape, type(_train_data))
train_dataset = UniLoader(_train_data, args.win_size, 1)
train_loader = DataLoader(dataset=train_dataset,
batch_size=args.batch_size,
shuffle=True,
num_workers=8,
drop_last=True)
val_loader = train_loader
args.input_c = train_data.shape[-1]
args.output_c = train_data.shape[-1]
# train_loader, train_set = get_loader_segment(args.index, args.data_path + args.data, batch_size=args.batch_size,
# win_size=args.win_size, mode='train', dataset=args.data)
# val_loader, val_set = get_loader_segment(args.index, args.data_path + args.data, batch_size=args.batch_size,
# win_size=args.win_size, mode='val', dataset=args.data)
# test_loader, test_set = get_loader_segment(args.index, args.data_path + args.data, batch_size=args.batch_size,
# win_size=args.win_size, mode='test', dataset=args.data)
# train_set = train_set.train
# val_set = val_set.val
# test_set = test_set.test
print("train_set.shape = ", _train_data.shape)
args.enc_in = _train_data.shape[-1]
args.c_out = _train_data.shape[-1]
if args.is_training:
for ii in range(args.itr):
# setting record of experiments
setting = '{}_{}_{}_{}_ft{}_sl{}_ll{}_pl{}_dm{}_nh{}_el{}_dl{}_df{}_fc{}_eb{}_dt{}_{}_{}'.format(
args.task_name,
args.model_id,
args.model,
args.data,
args.features,
args.seq_len,
args.label_len,
args.pred_len,
args.d_model,
args.n_heads,
args.e_layers,
args.d_layers,
args.d_ff,
args.factor,
args.embed,
args.distil,
args.des, ii)
exp = Exp(args, _train_data, train_loader, _train_data, train_loader, _train_data, train_loader) # set experiments
print('>>>>>>>start training : {}>>>>>>>>>>>>>>>>>>>>>>>>>>'.format(setting))
for i in range(train_data.shape[0]):
print("i = ", i, ", total num = ", train_data.shape[0])
print("train_data.shape = ", train_data.shape)
_train_data = train_data[i]
print("000train_data.shape = ", train_data.shape, type(train_data))
_train_data = np.array(_train_data)
print("111_train_data.shape = ", _train_data.shape, type(_train_data))
train_dataset = UniLoader(_train_data, args.win_size, 1)
train_loader = DataLoader(dataset=train_dataset,
batch_size=args.batch_size,
shuffle=True,
num_workers=2,
drop_last=True)
exp.train_loader = train_loader
exp.train_set = _train_data
exp.val_loader = train_loader
exp.val_set = _train_data
exp.train_uni(setting)
print('>>>>>>>testing : {}<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<'.format(setting))
eval_res = exp.test_uni(setting, all_train_data, all_test_data, all_test_labels, all_test_timestamps, delay, args)
torch.cuda.empty_cache()
print("result_dict = ", eval_res)
eval_res['dataset'] = args.data + str(args.index)
import pandas as pd
# Convert the metrics dict into a one-row DataFrame
df_new = pd.DataFrame([eval_res])
# Build the CSV save path
save_path = args.save_dir + args.save_csv_name
# Append to the existing results CSV if it already exists
if os.path.exists(save_path):
# Read the rows saved by previous runs
df_existing = pd.read_csv(save_path, index_col=0)
# Append the new row to the existing frame
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
# No file yet: start a new frame
df_combined = df_new
# Save the combined DataFrame as CSV
df_combined.to_csv(save_path, index=True, index_label="id")
else:
ii = 0
setting = '{}_{}_{}_{}_ft{}_sl{}_ll{}_pl{}_dm{}_nh{}_el{}_dl{}_df{}_fc{}_eb{}_dt{}_{}_{}'.format(
args.task_name,
args.model_id,
args.model,
args.data,
args.features,
args.seq_len,
args.label_len,
args.pred_len,
args.d_model,
args.n_heads,
args.e_layers,
args.d_layers,
args.d_ff,
args.factor,
args.embed,
args.distil,
args.des, ii)
exp = Exp(args) # set experiments
print('>>>>>>>testing : {}<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<'.format(setting))
exp.test(setting, test=1)
torch.cuda.empty_cache()
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_lstm_vae.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import torch
import numpy as np
import argparse
import time
import datetime
from lstm_vae import LSTM_VAE
import tasks
import datautils
from utils import init_dl_program, name_with_datetime, pkl_save, data_dropout
def save_checkpoint_callback(
save_every=1,
unit='epoch'
):
assert unit in ('epoch', 'iter')
def callback(model, loss):
n = model.n_epochs if unit == 'epoch' else model.n_iters
if n % save_every == 0:
model.save(f'{run_dir}/model_{n}.pkl')
return callback
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# parser.add_argument('dataset', help='The dataset name')
# parser.add_argument('run_name', help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
# parser.add_argument('--dataset', default='kpi', help='The dataset name, yahoo, kpi')
parser.add_argument('--dataset', default='kpi',
help='The dataset name, yahoo, kpi') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--is_multi', default=False, help='Whether the dataset is multivariate')
parser.add_argument('--datapath', default='./datasets/', help='Root directory of the datasets')
parser.add_argument('--index', type=int, default=143, help='Sub-dataset index passed to the data loader')
parser.add_argument('--run_name', default='lstm-vae',
help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
parser.add_argument('--loader', type=str, default='anomaly', help='The data loader used to load the experimental data--anomaly')
parser.add_argument('--gpu', type=int, default=1, help='The gpu no. used for training and inference (defaults to 1)')
parser.add_argument('--batch_size', type=int, default=8, help='The batch size (defaults to 8)')
parser.add_argument('--lr', type=float, default=0.001, help='The learning rate (defaults to 0.001)')
parser.add_argument('--hidden_size', type=int, default=16, help='The units of the LSTM hidden layer.')
parser.add_argument('--hidden_dim', type=int, default=3, help='The dims of the hidden representation (z).')
parser.add_argument('--z_kld_weight', type=float, default=1)
parser.add_argument('--x_kld_weight', type=float, default=1)
parser.add_argument('--max-train-length', type=int, default=3000, help='Sequences longer than this value are cropped into subsequences, each shorter than this value (defaults to 3000)')
parser.add_argument('--iters', type=int, default=None, help='The number of iterations')
parser.add_argument('--epochs', type=int, default=None, help='The number of epochs')
parser.add_argument('--save-every', type=int, default=None, help='Save the checkpoint every iterations/epochs')
parser.add_argument('--seed', type=int, default=None, help='The random seed')
parser.add_argument('--max-threads', type=int, default=None, help='The maximum allowed number of threads used by this process')
parser.add_argument('--eval', default=True, help='Whether to perform evaluation after training')
parser.add_argument('--irregular', type=float, default=0, help='The ratio of missing observations (defaults to 0)')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='lstm_vae_uni_0723.csv')
args = parser.parse_args()
print("Dataset:", args.dataset)
print("Arguments:", str(args))
device = init_dl_program(args.gpu, seed=args.seed, max_threads=args.max_threads)
print('Loading data... ', end='')
if args.loader == 'anomaly':
task_type = 'anomaly_detection'
if args.is_multi:
from datasets.data_loader import get_loader_segment
data_path = args.datapath + args.dataset + '/'
print("data_path = ", data_path)
_, train_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100,
mode='train',
dataset=args.dataset)
all_train_data = train_data_loader.train
all_train_labels = None
all_train_timestamps = None
all_test_data = train_data_loader.test
all_test_labels = train_data_loader.test_labels
all_test_timestamps = None
delay = 5
print("all_train_data test_data, test_labels.shape = ", all_train_data.shape, all_test_data.shape,
all_test_labels.shape)
train_data = np.expand_dims(all_train_data, axis=0)
print("train_data.shape = ", train_data.shape)
print("Read Success!!!")
else:
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
# i = 1
# for k in all_test_data:
# print("i = ", i, ", k = ", k)
# print("all_train_data.shape = ", all_train_data[k].shape)
# print("all_train_labels.shape = ", all_train_labels[k].shape)
# print("all_train_timestamps.shape = ", all_train_timestamps[k].shape)
# print("all_test_data.shape = ", all_test_data[k].shape)
# print("all_test_labels.shape = ", all_test_labels[k].shape)
# print("all_test_timestamps.shape = ", all_test_timestamps[k].shape)
# i = i + 1
# if i > 2:
# break
train_data = datautils.gen_ano_train_data(all_train_data)
print("train_data.shape = ", train_data.shape)
else:
raise ValueError(f"Unknown loader {args.loader}.")
if args.irregular > 0:
raise ValueError(f"Task type {task_type} is not supported when irregular>0.")
print('done')
config = dict(
batch_size=args.batch_size,
lr=args.lr,
hidden_size=args.hidden_size,
hidden_dim=args.hidden_dim,
z_kld_weight=args.z_kld_weight,
x_kld_weight=args.x_kld_weight,
max_train_length=args.max_train_length
)
if args.save_every is not None:
unit = 'epoch' if args.epochs is not None else 'iter'
config[f'after_{unit}_callback'] = save_checkpoint_callback(args.save_every, unit)
run_dir = 'training/' + args.dataset + '__' + name_with_datetime(args.run_name)
os.makedirs(run_dir, exist_ok=True)
t = time.time()
print("train_data.shape = ", train_data.shape)
model = LSTM_VAE(
input_dims=train_data.shape[-1],
device=device,
**config
)
loss_log = model.train(
train_data,
n_epochs=args.epochs,
n_iters=args.iters,
verbose=True
)
model.save(f'{run_dir}/model.pkl')
t = time.time() - t
print(f"\nTraining time: {datetime.timedelta(seconds=t)}")
print("Training time(seconds): ", t)
if args.eval:
if task_type == 'anomaly_detection':
out, eval_res = model.evaluate(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay, is_multi=args.is_multi)
else:
assert False
pkl_save(f'{run_dir}/out.pkl', out)
pkl_save(f'{run_dir}/eval_res.pkl', eval_res)
print('Evaluation result:', eval_res)
eval_res['dataset'] = args.dataset + str(args.index)
import pandas as pd
# Convert the metrics dict into a one-row DataFrame
df_new = pd.DataFrame([eval_res])
# Build the CSV save path
save_path = args.save_dir + args.save_csv_name
# Append to the existing results CSV if it already exists
if os.path.exists(save_path):
# Read the rows saved by previous runs
df_existing = pd.read_csv(save_path, index_col=0)
# Append the new row to the existing frame
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
# No file yet: start a new frame
df_combined = df_new
# Save the combined DataFrame as CSV
df_combined.to_csv(save_path, index=True, index_label="id")
print("Finished.")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_lstm_vae_multi.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import torch
import numpy as np
import argparse
import time
import datetime
from lstm_vae import LSTM_VAE
import tasks
import datautils
from utils import init_dl_program, name_with_datetime, pkl_save, data_dropout
def save_checkpoint_callback(
save_every=1,
unit='epoch'
):
assert unit in ('epoch', 'iter')
def callback(model, loss):
n = model.n_epochs if unit == 'epoch' else model.n_iters
if n % save_every == 0:
model.save(f'{run_dir}/model_{n}.pkl')
return callback
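# Usage sketch (a hedged example, not wired in by default): the returned
# closure is registered through the model config below, e.g.
#   config['after_epoch_callback'] = save_checkpoint_callback(5, 'epoch')
# which saves a checkpoint to {run_dir}/model_{n}.pkl every 5 epochs.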
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# parser.add_argument('dataset', help='The dataset name')
# parser.add_argument('run_name', help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
# parser.add_argument('--dataset', default='kpi', help='The dataset name, yahoo, kpi')
parser.add_argument('--dataset', default='PSM',
help='The dataset name') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--is_multi', default=True, help='Whether to use the multivariate data loader')
parser.add_argument('--datapath', default='./datasets/', help='The root directory of the datasets')
parser.add_argument('--index', type=int, default=203, help='The sub-dataset index (used for UCR)') ## [79, 108, 187, 203]
parser.add_argument('--run_name', default='lstm-vae',
help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
parser.add_argument('--loader', type=str, default='anomaly',
help='The data loader used to load the experimental data--anomaly')
parser.add_argument('--gpu', type=int, default=1,
help='The gpu no. used for training and inference (defaults to 1)')
parser.add_argument('--batch_size', type=int, default=8, help='The batch size (defaults to 8)')
parser.add_argument('--lr', type=float, default=0.001, help='The learning rate (defaults to 0.001)')
parser.add_argument('--hidden_size', type=int, default=16, help='The units of the LSTM hidden layer.')
parser.add_argument('--hidden_dim', type=int, default=3, help='The dims of the hidden representation (z).')
parser.add_argument('--z_kld_weight', type=float, default=1)
parser.add_argument('--x_kld_weight', type=float, default=1)
parser.add_argument('--max-train-length', type=int, default=3000,
help='Sequences longer than this value are cropped into sub-sequences, each shorter than this value (defaults to 3000)')
parser.add_argument('--iters', type=int, default=None, help='The number of iterations')
parser.add_argument('--epochs', type=int, default=None, help='The number of epochs')
parser.add_argument('--save-every', type=int, default=None,
help='Save the checkpoint every iterations/epochs')
parser.add_argument('--seed', type=int, default=None, help='The random seed')
parser.add_argument('--max-threads', type=int, default=None,
help='The maximum allowed number of threads used by this process')
parser.add_argument('--eval', default=True, help='Whether to perform evaluation after training')
parser.add_argument('--irregular', type=float, default=0, help='The ratio of missing observations (defaults to 0)')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='lstm_vae_ucr_0727.csv')
args = parser.parse_args()
print("Dataset:", args.dataset)
print("Arguments:", str(args))
# If the default save path does not exist, fall back to the alternate path
if not os.path.exists(args.save_dir):
args.save_dir = '/SSD/lz/tsm_ptms_anomaly_detection/result/'
print("save_dir = ", args.save_dir) # 输出检查
device = init_dl_program(args.gpu, seed=args.seed, max_threads=args.max_threads)
print('Loading data... ', end='')
if args.loader == 'anomaly':
task_type = 'anomaly_detection'
if args.is_multi:
from datasets.data_loader import get_loader_segment
data_path = args.datapath + args.dataset + '/'
print("data_path = ", data_path)
_, train_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100,
mode='train',
dataset=args.dataset)
all_train_data = train_data_loader.train
all_train_labels = None
all_train_timestamps = None
all_test_data = train_data_loader.test
all_test_labels = train_data_loader.test_labels
all_test_timestamps = None
delay = 5
print("all_train_data test_data, test_labels.shape = ", all_train_data.shape, all_test_data.shape,
all_test_labels.shape)
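# Add a leading batch dimension before training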
train_data = np.expand_dims(all_train_data, axis=0)
print("train_data.shape = ", train_data.shape)
print("Read Success!!!")
else:
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(
args.dataset)
# i = 1
# for k in all_test_data:
# print("i = ", i, ", k = ", k)
# print("all_train_data.shape = ", all_train_data[k].shape)
# print("all_train_labels.shape = ", all_train_labels[k].shape)
# print("all_train_timestamps.shape = ", all_train_timestamps[k].shape)
# print("all_test_data.shape = ", all_test_data[k].shape)
# print("all_test_labels.shape = ", all_test_labels[k].shape)
# print("all_test_timestamps.shape = ", all_test_timestamps[k].shape)
# i = i + 1
# if i > 2:
# break
train_data = datautils.gen_ano_train_data(all_train_data)
print("train_data.shape = ", train_data.shape)
else:
raise ValueError(f"Unknown loader {args.loader}.")
if args.irregular > 0:
raise ValueError(f"Task type {task_type} is not supported when irregular>0.")
print('done')
config = dict(
batch_size=args.batch_size,
lr=args.lr,
hidden_size=args.hidden_size,
hidden_dim=args.hidden_dim,
z_kld_weight=args.z_kld_weight,
x_kld_weight=args.x_kld_weight,
max_train_length=args.max_train_length
)
if args.save_every is not None:
unit = 'epoch' if args.epochs is not None else 'iter'
config[f'after_{unit}_callback'] = save_checkpoint_callback(args.save_every, unit)
run_dir = 'training/' + args.dataset + '__' + name_with_datetime(args.run_name)
os.makedirs(run_dir, exist_ok=True)
t = time.time()
print("train_data.shape = ", train_data.shape)
model = LSTM_VAE(
input_dims=train_data.shape[-1],
device=device,
**config
)
loss_log = model.train(
train_data,
n_epochs=args.epochs,
n_iters=args.iters,
verbose=True
)
model.save(f'{run_dir}/model.pkl')
t = time.time() - t
print(f"\nTraining time: {datetime.timedelta(seconds=t)}")
print("Training time(seconds): ", t)
if args.eval:
if task_type == 'anomaly_detection':
out, eval_res = model.evaluate(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data,
all_test_labels, all_test_timestamps, delay, is_multi=args.is_multi, ucr_index=args.index)
else:
assert False
pkl_save(f'{run_dir}/out.pkl', out)
pkl_save(f'{run_dir}/eval_res.pkl', eval_res)
print('Evaluation result:', eval_res)
eval_res['dataset'] = args.dataset + str(args.index)
import pandas as pd
# Convert the result dict to a single-row DataFrame
df_new = pd.DataFrame([eval_res])
# Build the output path
save_path = args.save_dir + args.save_csv_name
# Append to the existing CSV if it exists; otherwise start a new one
if os.path.exists(save_path):
df_existing = pd.read_csv(save_path, index_col=0)
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
df_combined = df_new
# Save the combined DataFrame as a CSV file
df_combined.to_csv(save_path, index=True, index_label="id")
print("Finished.")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_spot.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import numpy as np
import matplotlib.pyplot as plt
from spot import SPOT
import time
import datetime
import datautils
import argparse
from sklearn.metrics import f1_score, precision_score, recall_score
from other_anomaly_baselines.metrics.affiliation.metrics import pr_from_events
from other_anomaly_baselines.metrics.vus.metrics import get_range_vus_roc
from other_anomaly_baselines.metrics.affiliation.generics import convert_vector_to_events
from tadpak import evaluate
def adjustment(gt, pred):
anomaly_state = False
for i in range(len(gt)):
if gt[i] == 1 and pred[i] == 1 and not anomaly_state:
anomaly_state = True
for j in range(i, 0, -1):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
for j in range(i, len(gt)):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
elif gt[i] == 0:
anomaly_state = False
if anomaly_state:
pred[i] = 1
return gt, pred
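# Worked example (a hedged sketch): with gt = [0,1,1,1,0] and pred = [0,0,1,0,0],
# the single hit at index 2 lies inside the ground-truth segment [1, 3], so the
# segment is back- and forward-filled: pred becomes [0,1,1,1,0] (standard point
# adjustment). Note that range(i, 0, -1) never revisits index 0, so a segment
# starting at position 0 is not back-filled there.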
# consider delay threshold and missing segments
def get_range_proba(predict, label, delay=7):
splits = np.where(label[1:] != label[:-1])[0] + 1
is_anomaly = label[0] == 1
new_predict = np.array(predict)
pos = 0
for sp in splits:
if is_anomaly:
if 1 in predict[pos:min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
is_anomaly = not is_anomaly
pos = sp
sp = len(label)
if is_anomaly: # anomaly in the end
if 1 in predict[pos: min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
return new_predict
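# Worked example (a hedged sketch): with label = [0,0,1,1,1,0] and
# predict = [0,0,0,1,0,0], delay=1 accepts the first hit at index 3 (within
# the delay of the segment start at index 2) and credits the whole segment:
# new_predict = [0,0,1,1,1,0]. With delay=0 the late hit is discarded and
# new_predict = [0,0,0,0,0,0].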
parser = argparse.ArgumentParser()
# parser.add_argument('dataset', help='The dataset name')
# parser.add_argument('run_name', help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
# parser.add_argument('--dataset', default='kpi', help='The dataset name, yahoo, kpi')
parser.add_argument('--dataset', default='kpi',
help='The dataset name (yahoo, kpi)') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--is_multi', type=bool, default=False, help='Whether to use the multivariate data loader')
parser.add_argument('--datapath', default='./datasets/', help='The root directory of the datasets')
parser.add_argument('--index', type=int, default=143, help='The sub-dataset index (used for UCR)')
parser.add_argument('--gpu', type=int, default=0, help='The gpu no. used for training and inference (defaults to 0)')
parser.add_argument('--batch_size', type=int, default=8, help='The batch size (defaults to 8)')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='spot_0715.csv')
args = parser.parse_args()
print("Dataset:", args.dataset)
print("Arguments:", str(args))
if args.is_multi:
from datasets.data_loader import get_loader_segment
data_path = args.datapath + args.dataset + '/'
print("data_path = ", data_path)
_, train_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100,
mode='train',
dataset=args.dataset)
all_train_data = train_data_loader.train
all_train_labels = None
all_train_timestamps = None
all_test_data = train_data_loader.test
all_test_labels = train_data_loader.test_labels
all_test_timestamps = None
delay = 5
all_train_data = np.squeeze(all_train_data)
all_test_data = np.squeeze(all_test_data)
print("all_train_data test_data, test_labels.shape = ", all_train_data.shape, all_test_data.shape,
all_test_labels.shape)
# train_data = np.expand_dims(all_train_data, axis=0)
# print("train_data.shape = ", train_data.shape)
print("Read Success!!!")
else:
# dataset = 'kpi' # yahoo, kpi
print('Loading data... ', end='')
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
labels = []
pred = []
scores = []
if args.is_multi:
train_data = all_train_data # initial batch
train_labels = all_train_labels
train_timestamps = all_train_timestamps
test_data = all_test_data # stream
test_labels = all_test_labels
test_timestamps = all_test_timestamps
# print("k = ", k, ", test_data.shape = ", test_data.shape, test_labels.shape)
q = 1e-3 # risk parameter
s = SPOT(q) # SPOT object
s.fit(train_data, test_data) # data import
s.initialize() # initialization step
results = s.run() # run
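# Fields of `results` used below: 'thresholds' (one dynamic threshold per test
# point), 'alarms' (indices flagged as anomalous) and 'scores' (per-point
# scores, as returned by this repo's spot.py).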
# print()
test_thresholds = results['thresholds']
idx_anomaly = results['alarms']
print("test_thresholds = ", test_thresholds[:10])
print("idx_anomaly = ", idx_anomaly[:10])
print("scores = ", results['scores'][:10])
# scores = results['scores']
test_pred = np.zeros(len(test_thresholds))
test_pred[idx_anomaly] = 1
test_pred = get_range_proba(test_pred, test_labels, delay)
labels.append(test_labels)
pred.append(test_pred)
scores.append(results['scores'])
else:
for k in all_test_data:
train_data = all_train_data[k] # initial batch
train_labels = all_train_labels[k]
train_timestamps = all_train_timestamps[k]
test_data = all_test_data[k] # stream
test_labels = all_test_labels[k]
test_timestamps = all_test_timestamps[k]
# print("k = ", k, ", test_data.shape = ", test_data.shape, test_labels.shape)
q = 1e-3 # risk parameter
s = SPOT(q) # SPOT object
s.fit(train_data, test_data) # data import
s.initialize() # initialization step
results = s.run() # run
# print()
test_thresholds = results['thresholds']
idx_anomaly = results['alarms']
test_pred = np.zeros(len(test_thresholds))
test_pred[idx_anomaly] = 1
test_pred = get_range_proba(test_pred, test_labels, delay)
labels.append(test_labels)
pred.append(test_pred)
scores.append(results['scores'])
labels = np.concatenate(labels)
pred = np.concatenate(pred)
scores = np.concatenate(scores)
if args.is_multi:
# labels = np.asarray(labels_log, np.int64)[0]
# pred = np.asarray(res_log, np.int64)[0]
# print("labels.shape = ", labels.shape, labels[:5])
# print("pred.shape = ", pred.shape, pred[:5])
events_pred = convert_vector_to_events(pred)
events_gt = convert_vector_to_events(labels)
Trange = (0, len(labels))
affiliation = pr_from_events(events_pred, events_gt, Trange)
vus_results = get_range_vus_roc(labels, pred, 100) # default slidingWindow = 100
pred_scores = scores
results_f1_pa_k_10 = evaluate.evaluate(pred_scores, labels, k=10)
# results_f1_pa_k_30 = evaluate.evaluate(pred, labels, k=30)
results_f1_pa_k_50 = evaluate.evaluate(pred_scores, labels, k=50)
results_f1_pa_k_90 = evaluate.evaluate(pred_scores, labels, k=90)
labels, pred = adjustment(labels, pred)
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"],
'f1_pa_10': results_f1_pa_k_10['best_f1_w_pa'],
'f1_pa_50': results_f1_pa_k_50['best_f1_w_pa'],
'f1_pa_90': results_f1_pa_k_90['best_f1_w_pa'],
}
print("eval_res = ", eval_res)
else:
print('\nf1:', f1_score(labels, pred))
print('precision:', precision_score(labels, pred))
print('recall:', recall_score(labels, pred))
events_pred = convert_vector_to_events(pred)
events_gt = convert_vector_to_events(labels)
Trange = (0, len(labels))
affiliation = pr_from_events(events_pred, events_gt, Trange)
vus_results = get_range_vus_roc(labels, pred, 100) # default slidingWindow = 100
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"]
}
results_f1_pa_k_10 = evaluate.evaluate(scores, labels, k=10)
# results_f1_pa_k_30 = evaluate.evaluate(pred, labels, k=30)
results_f1_pa_k_50 = evaluate.evaluate(scores, labels, k=50)
results_f1_pa_k_90 = evaluate.evaluate(scores, labels, k=90)
eval_res['f1_pa_10'] = results_f1_pa_k_10['best_f1_w_pa']
eval_res['f1_pa_50'] = results_f1_pa_k_50['best_f1_w_pa']
eval_res['f1_pa_90'] = results_f1_pa_k_90['best_f1_w_pa']
eval_res['dataset'] = args.dataset + str(args.index)
import pandas as pd
# Convert the result dict to a single-row DataFrame
df_new = pd.DataFrame([eval_res])
# Build the output path
save_path = args.save_dir + args.save_csv_name
# Append to the existing CSV if it exists; otherwise start a new one
if os.path.exists(save_path):
df_existing = pd.read_csv(save_path, index_col=0)
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
df_combined = df_new
# Save the combined DataFrame as a CSV file
df_combined.to_csv(save_path, index=True, index_label="id")
print("Finished.")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_spot_multi.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import numpy as np
import matplotlib.pyplot as plt
from spot import SPOT
import time
import datetime
import datautils
import argparse
from sklearn.metrics import f1_score, precision_score, recall_score
from other_anomaly_baselines.metrics.affiliation.metrics import pr_from_events
from other_anomaly_baselines.metrics.vus.metrics import get_range_vus_roc
from other_anomaly_baselines.metrics.affiliation.generics import convert_vector_to_events
from tadpak import evaluate
def adjustment(gt, pred):
anomaly_state = False
for i in range(len(gt)):
if gt[i] == 1 and pred[i] == 1 and not anomaly_state:
anomaly_state = True
for j in range(i, 0, -1):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
for j in range(i, len(gt)):
if gt[j] == 0:
break
else:
if pred[j] == 0:
pred[j] = 1
elif gt[i] == 0:
anomaly_state = False
if anomaly_state:
pred[i] = 1
return gt, pred
# consider delay threshold and missing segments
def get_range_proba(predict, label, delay=7):
splits = np.where(label[1:] != label[:-1])[0] + 1
is_anomaly = label[0] == 1
new_predict = np.array(predict)
pos = 0
for sp in splits:
if is_anomaly:
if 1 in predict[pos:min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
is_anomaly = not is_anomaly
pos = sp
sp = len(label)
if is_anomaly: # anomaly in the end
if 1 in predict[pos: min(pos + delay + 1, sp)]:
new_predict[pos: sp] = 1
else:
new_predict[pos: sp] = 0
return new_predict
parser = argparse.ArgumentParser()
# parser.add_argument('dataset', help='The dataset name')
# parser.add_argument('run_name', help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
# parser.add_argument('--dataset', default='kpi', help='The dataset name, yahoo, kpi')
parser.add_argument('--dataset', default='UCR',
help='The dataset name') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--is_multi', type=bool, default=True, help='Whether to use the multivariate data loader')
parser.add_argument('--datapath', default='./datasets/', help='The root directory of the datasets')
parser.add_argument('--index', type=int, default=241, help='The sub-dataset index (used for UCR)') ## [79, 108, 187, 203, 239, 240, 241]
parser.add_argument('--gpu', type=int, default=0, help='The gpu no. used for training and inference (defaults to 0)')
parser.add_argument('--batch_size', type=int, default=8, help='The batch size (defaults to 8)')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='spot_ucr_0727.csv')
args = parser.parse_args()
print("Dataset:", args.dataset)
print("Arguments:", str(args))
if args.is_multi:
from datasets.data_loader import get_loader_segment
data_path = args.datapath + args.dataset + '/'
print("data_path = ", data_path)
_, train_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100,
mode='train',
dataset=args.dataset)
all_train_data = train_data_loader.train
all_train_labels = None
all_train_timestamps = None
all_test_data = train_data_loader.test
all_test_labels = train_data_loader.test_labels
all_test_timestamps = None
delay = 5
all_train_data = np.squeeze(all_train_data)
all_test_data = np.squeeze(all_test_data)
print("all_train_data test_data, test_labels.shape = ", all_train_data.shape, all_test_data.shape,
all_test_labels.shape)
# train_data = np.expand_dims(all_train_data, axis=0)
# print("train_data.shape = ", train_data.shape)
print("Read Success!!!")
else:
# dataset = 'kpi' # yahoo, kpi
print('Loading data... ', end='')
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
labels = []
pred = []
scores = []
if args.is_multi:
train_data = all_train_data # initial batch
train_labels = all_train_labels
train_timestamps = all_train_timestamps
test_data = all_test_data # stream
test_labels = all_test_labels
test_timestamps = all_test_timestamps
# print("k = ", k, ", test_data.shape = ", test_data.shape, test_labels.shape)
q = 1e-3 # risk parameter
s = SPOT(q) # SPOT object
s.fit(train_data, test_data) # data import
s.initialize() # initialization step
results = s.run() # run
# print()
test_thresholds = results['thresholds']
idx_anomaly = results['alarms']
print("test_thresholds = ", test_thresholds[:10])
print("idx_anomaly = ", idx_anomaly[:10])
print("scores = ", results['scores'][:10])
# scores = results['scores']
test_pred = np.zeros(len(test_thresholds))
test_pred[idx_anomaly] = 1
test_pred = get_range_proba(test_pred, test_labels, delay)
labels.append(test_labels)
pred.append(test_pred)
scores.append(results['scores'])
else:
for k in all_test_data:
train_data = all_train_data[k] # initial batch
train_labels = all_train_labels[k]
train_timestamps = all_train_timestamps[k]
test_data = all_test_data[k] # stream
test_labels = all_test_labels[k]
test_timestamps = all_test_timestamps[k]
# print("k = ", k, ", test_data.shape = ", test_data.shape, test_labels.shape)
q = 1e-3 # risk parameter
s = SPOT(q) # SPOT object
s.fit(train_data, test_data) # data import
s.initialize() # initialization step
results = s.run() # run
# print()
test_thresholds = results['thresholds']
idx_anomaly = results['alarms']
test_pred = np.zeros(len(test_thresholds))
test_pred[idx_anomaly] = 1
test_pred = get_range_proba(test_pred, test_labels, delay)
labels.append(test_labels)
pred.append(test_pred)
scores.append(results['scores'])
labels = np.concatenate(labels)
pred = np.concatenate(pred)
scores = np.concatenate(scores)
if args.is_multi:
# labels = np.asarray(labels_log, np.int64)[0]
# pred = np.asarray(res_log, np.int64)[0]
# print("labels.shape = ", labels.shape, labels[:5])
# print("pred.shape = ", pred.shape, pred[:5])
if args.index in (79, 108, 187, 203):
labels, pred = adjustment(labels, pred)
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": None,
"Affiliation recall": None,
"R_AUC_ROC": None,
"R_AUC_PR": None,
"VUS_ROC": None,
"VUS_PR": None,
'f1_pa_10': None,
'f1_pa_50': None,
'f1_pa_90': None,
}
else:
events_pred = convert_vector_to_events(pred)
events_gt = convert_vector_to_events(labels)
Trange = (0, len(labels))
affiliation = pr_from_events(events_pred, events_gt, Trange)
vus_results = get_range_vus_roc(labels, pred, 100) # default slidingWindow = 100
pred_scores = scores
if args.index in (239, 240):
labels, pred = adjustment(labels, pred)
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"],
'f1_pa_10': None,
'f1_pa_50': None,
'f1_pa_90': None,
}
else:
results_f1_pa_k_10 = evaluate.evaluate(pred_scores, labels, k=10)
# results_f1_pa_k_30 = evaluate.evaluate(pred, labels, k=30)
results_f1_pa_k_50 = evaluate.evaluate(pred_scores, labels, k=50)
results_f1_pa_k_90 = evaluate.evaluate(pred_scores, labels, k=90)
labels, pred = adjustment(labels, pred)
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"],
'f1_pa_10': results_f1_pa_k_10['best_f1_w_pa'],
'f1_pa_50': results_f1_pa_k_50['best_f1_w_pa'],
'f1_pa_90': results_f1_pa_k_90['best_f1_w_pa'],
}
print("eval_res = ", eval_res)
else:
print('\nf1:', f1_score(labels, pred))
print('precision:', precision_score(labels, pred))
print('recall:', recall_score(labels, pred))
events_pred = convert_vector_to_events(pred)
events_gt = convert_vector_to_events(labels)
Trange = (0, len(labels))
affiliation = pr_from_events(events_pred, events_gt, Trange)
vus_results = get_range_vus_roc(labels, pred, 100) # default slidingWindow = 100
eval_res = {
'f1': f1_score(labels, pred),
'precision': precision_score(labels, pred),
'recall': recall_score(labels, pred),
"Affiliation precision": affiliation['precision'],
"Affiliation recall": affiliation['recall'],
"R_AUC_ROC": vus_results["R_AUC_ROC"],
"R_AUC_PR": vus_results["R_AUC_PR"],
"VUS_ROC": vus_results["VUS_ROC"],
"VUS_PR": vus_results["VUS_PR"]
}
results_f1_pa_k_10 = evaluate.evaluate(scores, labels, k=10)
# results_f1_pa_k_30 = evaluate.evaluate(pred, labels, k=30)
results_f1_pa_k_50 = evaluate.evaluate(scores, labels, k=50)
results_f1_pa_k_90 = evaluate.evaluate(scores, labels, k=90)
eval_res['f1_pa_10'] = results_f1_pa_k_10['best_f1_w_pa']
eval_res['f1_pa_50'] = results_f1_pa_k_50['best_f1_w_pa']
eval_res['f1_pa_90'] = results_f1_pa_k_90['best_f1_w_pa']
eval_res['dataset'] = args.dataset + str(args.index)
import pandas as pd
# Convert the result dict to a single-row DataFrame
df_new = pd.DataFrame([eval_res])
# Build the output path
save_path = args.save_dir + args.save_csv_name
# Append to the existing CSV if it exists; otherwise start a new one
if os.path.exists(save_path):
df_existing = pd.read_csv(save_path, index_col=0)
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
df_combined = df_new
# Save the combined DataFrame as a CSV file
df_combined.to_csv(save_path, index=True, index_label="id")
print("Finished.")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_timesnet.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import argparse
import torch
from other_anomaly_baselines.exp_anomaly_detection import Exp_Anomaly_Detection
from other_anomaly_baselines.datasets.data_loader import get_loader_segment
import random
import numpy as np
if __name__ == '__main__':
fix_seed = 42
random.seed(fix_seed)
torch.manual_seed(fix_seed)
np.random.seed(fix_seed)
parser = argparse.ArgumentParser(description='TimesNet')
# basic config
parser.add_argument('--task_name', type=str, default='anomaly_detection',
help='task name, options:[long_term_forecast, short_term_forecast, imputation, classification, anomaly_detection]')
parser.add_argument('--is_training', type=int, default=1, help='status')
parser.add_argument('--model_id', type=str, default='test', help='model id')
parser.add_argument('--model', type=str, default='TimesNet',
help='model name, options: [Autoformer, Transformer, TimesNet]')
# data loader
parser.add_argument('--data', type=str, default='UCR', help='dataset type') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--root_path', type=str, default='./data/ETT/', help='root path of the data file')
# parser.add_argument('--data_path', type=str, default='ETTh1.csv', help='data file')
parser.add_argument('--features', type=str, default='M',
help='forecasting task, options:[M, S, MS]; M:multivariate predict multivariate, S:univariate predict univariate, MS:multivariate predict univariate')
parser.add_argument('--target', type=str, default='OT', help='target feature in S or MS task')
parser.add_argument('--freq', type=str, default='h',
help='freq for time features encoding, options:[s:secondly, t:minutely, h:hourly, d:daily, b:business days, w:weekly, m:monthly], you can also use more detailed freq like 15min or 3h')
parser.add_argument('--checkpoints', type=str, default='./checkpoints/', help='location of model checkpoints')
# forecasting task
parser.add_argument('--seq_len', type=int, default=100, help='input sequence length')
parser.add_argument('--label_len', type=int, default=48, help='start token length')
parser.add_argument('--pred_len', type=int, default=0, help='prediction sequence length')
parser.add_argument('--seasonal_patterns', type=str, default='Monthly', help='subset for M4')
parser.add_argument('--inverse', action='store_true', help='inverse output data', default=False)
# imputation task
parser.add_argument('--mask_rate', type=float, default=0.25, help='mask ratio')
# anomaly detection task
parser.add_argument('--anomaly_ratio', type=float, default=0.5, help='prior anomaly ratio (%)')
# model define
parser.add_argument('--top_k', type=int, default=3, help='for TimesBlock')
parser.add_argument('--num_kernels', type=int, default=6, help='for Inception')
parser.add_argument('--enc_in', type=int, default=1, help='encoder input size') ## 55 for MSL, 38 for SMD, SMAP for 25, PSM for 25, SWAT for 51, NIPS_TS_Swan for 38,
parser.add_argument('--dec_in', type=int, default=7, help='decoder input size') ## NIPS_TS_Water for 38, UCR for 1
parser.add_argument('--c_out', type=int, default=1, help='output size')
parser.add_argument('--d_model', type=int, default=8, help='dimension of model')
parser.add_argument('--n_heads', type=int, default=8, help='num of heads')
parser.add_argument('--e_layers', type=int, default=1, help='num of encoder layers')
parser.add_argument('--d_layers', type=int, default=1, help='num of decoder layers')
parser.add_argument('--d_ff', type=int, default=16, help='dimension of fcn')
parser.add_argument('--moving_avg', type=int, default=25, help='window size of moving average')
parser.add_argument('--factor', type=int, default=1, help='attn factor')
parser.add_argument('--distil', action='store_false',
help='whether to use distilling in encoder, using this argument means not using distilling',
default=True)
parser.add_argument('--dropout', type=float, default=0.1, help='dropout')
parser.add_argument('--embed', type=str, default='timeF',
help='time features encoding, options:[timeF, fixed, learned]')
parser.add_argument('--activation', type=str, default='gelu', help='activation')
parser.add_argument('--output_attention', action='store_true', help='whether to output attention in encoder')
# optimization
parser.add_argument('--num_workers', type=int, default=10, help='data loader num workers')
parser.add_argument('--itr', type=int, default=1, help='number of repeated experiments')
parser.add_argument('--train_epochs', type=int, default=3, help='train epochs')
parser.add_argument('--batch_size', type=int, default=32, help='batch size of train input data')
parser.add_argument('--patience', type=int, default=3, help='early stopping patience')
parser.add_argument('--learning_rate', type=float, default=0.0001, help='optimizer learning rate')
parser.add_argument('--des', type=str, default='test', help='exp description')
parser.add_argument('--loss', type=str, default='MSE', help='loss function')
parser.add_argument('--lradj', type=str, default='type1', help='adjust learning rate')
parser.add_argument('--use_amp', action='store_true', help='use automatic mixed precision training', default=False)
# GPU
parser.add_argument('--use_gpu', type=bool, default=True, help='use gpu')
parser.add_argument('--gpu', type=int, default=0, help='gpu')
parser.add_argument('--use_multi_gpu', action='store_true', help='use multiple gpus', default=False)
parser.add_argument('--devices', type=str, default='0,1,2,3', help='device ids of multiple gpus')
# de-stationary projector params
parser.add_argument('--p_hidden_dims', type=int, nargs='+', default=[128, 128],
help='hidden layer dimensions of projector (List)')
parser.add_argument('--p_hidden_layers', type=int, default=2, help='number of hidden layers in projector')
# Default
parser.add_argument('--index', type=int, default=137)
parser.add_argument('--data_path', type=str, default='datasets/')
parser.add_argument('--win_size', type=int, default=100)
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='timesnet_ucr_0727.csv')
args = parser.parse_args()
args.use_gpu = True if torch.cuda.is_available() and args.use_gpu else False
if args.use_gpu and args.use_multi_gpu:
args.devices = args.devices.replace(' ', '')
device_ids = args.devices.split(',')
args.device_ids = [int(id_) for id_ in device_ids]
args.gpu = args.device_ids[0]
print('Args in experiment:')
print(args)
Exp = Exp_Anomaly_Detection
train_loader, train_set = get_loader_segment(args.index, args.data_path + args.data, batch_size=args.batch_size,
win_size=args.win_size, mode='train', dataset=args.data)
val_loader, val_set = get_loader_segment(args.index, args.data_path + args.data, batch_size=args.batch_size,
win_size=args.win_size, mode='val', dataset=args.data)
test_loader, test_set = get_loader_segment(args.index, args.data_path + args.data, batch_size=args.batch_size,
win_size=args.win_size, mode='test', dataset=args.data)
train_set = train_set.train
val_set = val_set.val
test_set = test_set.test
print("train_set.shape = ", train_set.shape, ", test_set.shape = ", test_set.shape, test_set.shape[-1])
args.enc_in = train_set.shape[-1]
args.c_out = train_set.shape[-1]
if args.is_training:
for ii in range(args.itr):
# setting record of experiments
setting = '{}_{}_{}_{}_ft{}_sl{}_ll{}_pl{}_dm{}_nh{}_el{}_dl{}_df{}_fc{}_eb{}_dt{}_{}_{}'.format(
args.task_name,
args.model_id,
args.model,
args.data,
args.features,
args.seq_len,
args.label_len,
args.pred_len,
args.d_model,
args.n_heads,
args.e_layers,
args.d_layers,
args.d_ff,
args.factor,
args.embed,
args.distil,
args.des, ii)
exp = Exp(args, train_set, train_loader, val_set, val_loader, test_set, test_loader) # set experiments
print('>>>>>>>start training : {}>>>>>>>>>>>>>>>>>>>>>>>>>>'.format(setting))
exp.train(setting)
print('>>>>>>>testing : {}<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<'.format(setting))
eval_res = exp.test(setting, dataset=args.data, ucr_index=args.index)
torch.cuda.empty_cache()
print("result_dict = ", eval_res)
eval_res['dataset'] = args.data + str(args.index)
import pandas as pd
# Convert the result dict to a single-row DataFrame
df_new = pd.DataFrame([eval_res])
# Build the output path
save_path = args.save_dir + args.save_csv_name
# Append to the existing CSV if it exists; otherwise start a new one
if os.path.exists(save_path):
df_existing = pd.read_csv(save_path, index_col=0)
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
df_combined = df_new
# Save the combined DataFrame as a CSV file
df_combined.to_csv(save_path, index=True, index_label="id")
else:
ii = 0
setting = '{}_{}_{}_{}_ft{}_sl{}_ll{}_pl{}_dm{}_nh{}_el{}_dl{}_df{}_fc{}_eb{}_dt{}_{}_{}'.format(
args.task_name,
args.model_id,
args.model,
args.data,
args.features,
args.seq_len,
args.label_len,
args.pred_len,
args.d_model,
args.n_heads,
args.e_layers,
args.d_layers,
args.d_ff,
args.factor,
args.embed,
args.distil,
args.des, ii)
exp = Exp(args) # set experiments
print('>>>>>>>testing : {}<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<'.format(setting))
exp.test(setting, test=1)
torch.cuda.empty_cache()
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_timesnet_uni.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import argparse
import torch
from other_anomaly_baselines.exp_anomaly_detection import Exp_Anomaly_Detection
from other_anomaly_baselines.datasets.data_loader import get_loader_segment
import numpy as np
import random
import datautils
from torch.utils.data import TensorDataset, DataLoader
class UniLoader(object):
def __init__(self, data_set, win_size, step, mode="train"):
self.mode = mode
self.step = step
self.win_size = win_size
self.train = data_set
def __len__(self):
"""
Number of images in the object dataset.
"""
return (self.train.shape[0] - self.win_size) // self.step + 1
def __getitem__(self, index):
index = index * self.step
return np.float32(self.train[index:index + self.win_size])
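# Example (a hedged sketch): a series of length 1000 with win_size=100 and
# step=1 yields (1000 - 100) // 1 + 1 = 901 overlapping windows, and
# __getitem__(i) returns the float32 window starting at position i.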
if __name__ == '__main__':
fix_seed = 42
random.seed(fix_seed)
torch.manual_seed(fix_seed)
np.random.seed(fix_seed)
parser = argparse.ArgumentParser(description='TimesNet')
# basic config
parser.add_argument('--task_name', type=str, default='anomaly_detection',
help='task name, options:[long_term_forecast, short_term_forecast, imputation, classification, anomaly_detection]')
parser.add_argument('--is_training', type=int, default=1, help='status')
parser.add_argument('--model_id', type=str, default='test', help='model id')
parser.add_argument('--model', type=str, default='TimesNet',
help='model name, options: [Autoformer, Transformer, TimesNet]')
# data loader
# parser.add_argument('--data', type=str, default='UCR', help='dataset type') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--data', type=str, default='kpi') ## kpi, yahoo
parser.add_argument('--root_path', type=str, default='./data/ETT/', help='root path of the data file')
# parser.add_argument('--data_path', type=str, default='ETTh1.csv', help='data file')
parser.add_argument('--features', type=str, default='M',
help='forecasting task, options:[M, S, MS]; M:multivariate predict multivariate, S:univariate predict univariate, MS:multivariate predict univariate')
parser.add_argument('--target', type=str, default='OT', help='target feature in S or MS task')
parser.add_argument('--freq', type=str, default='h',
help='freq for time features encoding, options:[s:secondly, t:minutely, h:hourly, d:daily, b:business days, w:weekly, m:monthly], you can also use more detailed freq like 15min or 3h')
parser.add_argument('--checkpoints', type=str, default='./checkpoints/', help='location of model checkpoints')
# forecasting task
parser.add_argument('--seq_len', type=int, default=100, help='input sequence length')
parser.add_argument('--label_len', type=int, default=48, help='start token length')
parser.add_argument('--pred_len', type=int, default=0, help='prediction sequence length')
parser.add_argument('--seasonal_patterns', type=str, default='Monthly', help='subset for M4')
parser.add_argument('--inverse', action='store_true', help='inverse output data', default=False)
# imputation task
parser.add_argument('--mask_rate', type=float, default=0.25, help='mask ratio')
# anomaly detection task
parser.add_argument('--anomaly_ratio', type=float, default=1, help='prior anomaly ratio (%)')
# model define
parser.add_argument('--top_k', type=int, default=3, help='for TimesBlock')
parser.add_argument('--num_kernels', type=int, default=6, help='for Inception')
parser.add_argument('--enc_in', type=int, default=1, help='encoder input size') ## 55 for MSL, 38 for SMD, SMAP for 25, PSM for 25, SWAT for 51, NIPS_TS_Swan for 38,
parser.add_argument('--dec_in', type=int, default=1, help='decoder input size') ## NIPS_TS_Water for 38, UCR for 1
parser.add_argument('--c_out', type=int, default=1, help='output size')
parser.add_argument('--d_model', type=int, default=8, help='dimension of model')
parser.add_argument('--n_heads', type=int, default=8, help='num of heads')
parser.add_argument('--e_layers', type=int, default=1, help='num of encoder layers')
parser.add_argument('--d_layers', type=int, default=1, help='num of decoder layers')
parser.add_argument('--d_ff', type=int, default=16, help='dimension of fcn')
parser.add_argument('--moving_avg', type=int, default=25, help='window size of moving average')
parser.add_argument('--factor', type=int, default=1, help='attn factor')
parser.add_argument('--distil', action='store_false',
help='whether to use distilling in encoder, using this argument means not using distilling',
default=True)
parser.add_argument('--dropout', type=float, default=0.1, help='dropout')
parser.add_argument('--embed', type=str, default='timeF',
help='time features encoding, options:[timeF, fixed, learned]')
parser.add_argument('--activation', type=str, default='gelu', help='activation')
parser.add_argument('--output_attention', action='store_true', help='whether to output attention in encoder')
# optimization
parser.add_argument('--num_workers', type=int, default=10, help='data loader num workers')
parser.add_argument('--itr', type=int, default=1, help='number of repeated experiments')
parser.add_argument('--train_epochs', type=int, default=1, help='train epochs')
parser.add_argument('--batch_size', type=int, default=32, help='batch size of train input data')
parser.add_argument('--patience', type=int, default=3, help='early stopping patience')
parser.add_argument('--learning_rate', type=float, default=0.0001, help='optimizer learning rate')
parser.add_argument('--des', type=str, default='test', help='exp description')
parser.add_argument('--loss', type=str, default='MSE', help='loss function')
parser.add_argument('--lradj', type=str, default='type1', help='adjust learning rate')
parser.add_argument('--use_amp', action='store_true', help='use automatic mixed precision training', default=False)
# GPU
parser.add_argument('--use_gpu', type=bool, default=True, help='use gpu')
parser.add_argument('--gpu', type=int, default=1, help='gpu')
parser.add_argument('--use_multi_gpu', action='store_true', help='use multiple gpus', default=False)
parser.add_argument('--devices', type=str, default='0,1,2,3', help='device ids of multiple gpus')
# de-stationary projector params
parser.add_argument('--p_hidden_dims', type=int, nargs='+', default=[128, 128],
help='hidden layer dimensions of projector (List)')
parser.add_argument('--p_hidden_layers', type=int, default=2, help='number of hidden layers in projector')
# Default
parser.add_argument('--index', type=int, default=137)
parser.add_argument('--data_path', type=str, default='datasets/')
parser.add_argument('--win_size', type=int, default=100)
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='timesnet_uni_0722.csv')
args = parser.parse_args()
args.use_gpu = True if torch.cuda.is_available() and args.use_gpu else False
# If the default save path does not exist, fall back to the alternate path
if not os.path.exists(args.save_dir):
args.save_dir = '/SSD/lz/tsm_ptms_anomaly_detection/result/'
print("save_dir = ", args.save_dir) # 输出检查
if args.use_gpu and args.use_multi_gpu:
args.devices = args.devices.replace(' ', '')
device_ids = args.devices.split(',')
args.device_ids = [int(id_) for id_ in device_ids]
args.gpu = args.device_ids[0]
print('Args in experiment:')
print(args)
Exp = Exp_Anomaly_Detection
# dataset = 'MSL'
# _train_loader, _train_set = get_loader_segment(args.index, args.data_path + dataset,
# batch_size=args.batch_size,
# win_size=args.win_size, mode='train', dataset=dataset)
#
# _train_set = _train_set.train
#
# print("_train_set.shape = ", _train_set.shape)
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(
args.data)
train_data = datautils.gen_ano_train_data(all_train_data)
print("train_data.shape = ", train_data.shape)
_train_data = train_data[0]
print("000train_data.shape = ", train_data.shape, type(train_data))
_train_data = np.array(_train_data)
print("111_train_data.shape = ", _train_data.shape, type(_train_data))
train_dataset = UniLoader(_train_data, args.win_size, 1)
train_loader = DataLoader(dataset=train_dataset,
batch_size=args.batch_size,
shuffle=True,
num_workers=2,
drop_last=True)
val_loader = train_loader
args.input_c = train_data.shape[-1]
args.output_c = train_data.shape[-1]
# train_loader, train_set = get_loader_segment(args.index, args.data_path + args.data, batch_size=args.batch_size,
# win_size=args.win_size, mode='train', dataset=args.data)
# val_loader, val_set = get_loader_segment(args.index, args.data_path + args.data, batch_size=args.batch_size,
# win_size=args.win_size, mode='val', dataset=args.data)
# test_loader, test_set = get_loader_segment(args.index, args.data_path + args.data, batch_size=args.batch_size,
# win_size=args.win_size, mode='test', dataset='UCR')
# train_set = train_set.train
# val_set = val_set.val
# test_set = test_set.test
print("train_set.shape = ", _train_data.shape)
args.enc_in = _train_data.shape[-1]
args.c_out = _train_data.shape[-1]
if args.is_training:
for ii in range(args.itr):
# setting record of experiments
setting = '{}_{}_{}_{}_ft{}_sl{}_ll{}_pl{}_dm{}_nh{}_el{}_dl{}_df{}_fc{}_eb{}_dt{}_{}_{}'.format(
args.task_name,
args.model_id,
args.model,
args.data,
args.features,
args.seq_len,
args.label_len,
args.pred_len,
args.d_model,
args.n_heads,
args.e_layers,
args.d_layers,
args.d_ff,
args.factor,
args.embed,
args.distil,
args.des, ii)
exp = Exp(args, _train_data, train_loader, _train_data, train_loader, _train_data, train_loader) # set experiments
print('>>>>>>>start training : {}>>>>>>>>>>>>>>>>>>>>>>>>>>'.format(setting))
exp.train_uni(setting)
print('>>>>>>>testing : {}<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<'.format(setting))
eval_res = exp.test_uni(setting, all_train_data, all_test_data, all_test_labels, all_test_timestamps, delay, args)
torch.cuda.empty_cache()
print("result_dict = ", eval_res)
eval_res['dataset'] = args.data + str(args.index)
import pandas as pd
# Convert the result dict to a single-row DataFrame
df_new = pd.DataFrame([eval_res])
# Build the output path
save_path = args.save_dir + args.save_csv_name
# Append to the existing CSV if it exists; otherwise start a new one
if os.path.exists(save_path):
df_existing = pd.read_csv(save_path, index_col=0)
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
df_combined = df_new
# Save the combined DataFrame as a CSV file
df_combined.to_csv(save_path, index=True, index_label="id")
else:
ii = 0
setting = '{}_{}_{}_{}_ft{}_sl{}_ll{}_pl{}_dm{}_nh{}_el{}_dl{}_df{}_fc{}_eb{}_dt{}_{}_{}'.format(
args.task_name,
args.model_id,
args.model,
args.data,
args.features,
args.seq_len,
args.label_len,
args.pred_len,
args.d_model,
args.n_heads,
args.e_layers,
args.d_layers,
args.d_ff,
args.factor,
args.embed,
args.distil,
args.des, ii)
exp = Exp(args) # set experiments
print('>>>>>>>testing : {}<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<'.format(setting))
exp.test(setting, test=1)
torch.cuda.empty_cache()
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_ts2vec.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import torch
import numpy as np
import argparse
import time
import datetime
from ts2vec import TS2Vec
from other_anomaly_baselines.tasks.anomaly_detection import eval_anomaly_detection, eval_anomaly_detection_coldstart
import datautils
from utils import init_dl_program, name_with_datetime, pkl_save, data_dropout
def save_checkpoint_callback(
save_every=1,
unit='epoch'
):
assert unit in ('epoch', 'iter')
def callback(model, loss):
n = model.n_epochs if unit == 'epoch' else model.n_iters
if n % save_every == 0:
model.save(f'{run_dir}/model_{n}.pkl')
return callback
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', default='kpi', help='The dataset name (yahoo, kpi)') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--is_multi', default=False, help='Whether to use the multivariate data loader')
parser.add_argument('--datapath', default='./datasets/', help='The root directory of the datasets')
parser.add_argument('--index', type=int, default=143, help='The sub-dataset index (used for UCR)')
parser.add_argument('--run_name', default='ts2Vec', help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
parser.add_argument('--loader', type=str, default='anomaly', help='The data loader used to load the experimental data. This can be set to anomaly or anomaly_coldstart')
parser.add_argument('--gpu', type=int, default=0, help='The gpu no. used for training and inference (defaults to 0)')
parser.add_argument('--batch_size', type=int, default=8, help='The batch size (defaults to 8)')
parser.add_argument('--lr', type=float, default=0.001, help='The learning rate (defaults to 0.001)')
parser.add_argument('--repr-dims', type=int, default=320, help='The representation dimension (defaults to 320)')
parser.add_argument('--max-train-length', type=int, default=3000, help='Sequences longer than this value are cropped into sub-sequences, each shorter than this value (defaults to 3000)')
parser.add_argument('--iters', type=int, default=None, help='The number of iterations')
parser.add_argument('--epochs', type=int, default=None, help='The number of epochs')
parser.add_argument('--save-every', type=int, default=None, help='Save the checkpoint every iterations/epochs')
parser.add_argument('--seed', type=int, default=None, help='The random seed')
parser.add_argument('--max-threads', type=int, default=None, help='The maximum allowed number of threads used by this process')
parser.add_argument('--eval', default='True', help='Whether to perform evaluation after training') ## action="store_true"
parser.add_argument('--irregular', type=float, default=0, help='The ratio of missing observations (defaults to 0)')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='ts2vec_uni_0723.csv')
args = parser.parse_args()
print("Dataset:", args.dataset)
print("Arguments:", str(args))
device = init_dl_program(args.gpu, seed=args.seed, max_threads=args.max_threads)
print('Loading data... ', end='')
if args.loader == 'anomaly':
task_type = 'anomaly_detection'
if args.is_multi:
from datasets.data_loader import get_loader_segment
data_path = args.datapath + args.dataset + '/'
print("data_path = ", data_path)
_, train_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100, mode='train',
dataset=args.dataset)
# val_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100, mode='val',
# dataset=args.dataset)
# test_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100, mode='test',
# dataset=args.dataset)
all_train_data = train_data_loader.train
all_train_labels = None
all_train_timestamps = None
all_test_data = train_data_loader.test
all_test_labels = train_data_loader.test_labels
all_test_timestamps = None
delay = 5
print("all_train_data test_data, test_labels.shape = ", all_train_data.shape, all_test_data.shape, all_test_labels.shape)
train_data = np.expand_dims(all_train_data, axis=0)
print("train_data.shape = ", train_data.shape)
print("Read Success!!!")
else:
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
train_data = datautils.gen_ano_train_data(all_train_data)
elif args.loader == 'anomaly_coldstart':
task_type = 'anomaly_detection_coldstart'
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
train_data, _, _, _ = datautils.load_UCR('FordA')
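# Cold start: the encoder is pretrained on the UCR FordA classification data
# rather than on the anomaly-detection training split.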
else:
raise ValueError(f"Unknown loader {args.loader}.")
if args.irregular > 0:
raise ValueError(f"Task type {task_type} is not supported when irregular>0.")
print('done')
config = dict(
batch_size=args.batch_size,
lr=args.lr,
output_dims=args.repr_dims,
max_train_length=args.max_train_length
)
if args.save_every is not None:
unit = 'epoch' if args.epochs is not None else 'iter'
config[f'after_{unit}_callback'] = save_checkpoint_callback(args.save_every, unit)
run_dir = 'training/' + args.dataset + '__' + name_with_datetime(args.run_name)
os.makedirs(run_dir, exist_ok=True)
t = time.time()
model = TS2Vec(
input_dims=train_data.shape[-1],
device=device,
**config
)
loss_log = model.fit(
train_data,
n_epochs=args.epochs,
n_iters=args.iters,
verbose=True
)
model.save(f'{run_dir}/model.pkl')
t = time.time() - t
print(f"\nTraining time: {datetime.timedelta(seconds=t)}")
print("Training time(seconds): ", t)
if args.eval:
if task_type == 'anomaly_detection':
out, eval_res = eval_anomaly_detection(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data,
all_test_labels, all_test_timestamps, delay, is_multi=args.is_multi)
elif task_type == 'anomaly_detection_coldstart':
out, eval_res = eval_anomaly_detection_coldstart(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay)
else:
assert False
pkl_save(f'{run_dir}/out.pkl', out)
pkl_save(f'{run_dir}/eval_res.pkl', eval_res)
print('Evaluation result:', eval_res)
eval_res['dataset'] = args.dataset + str(args.index)
import pandas as pd
# Convert the result dict to a single-row DataFrame
df_new = pd.DataFrame([eval_res])
# Build the output path
save_path = args.save_dir + args.save_csv_name
# Append to the existing CSV if it exists; otherwise start a new one
if os.path.exists(save_path):
df_existing = pd.read_csv(save_path, index_col=0)
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
df_combined = df_new
# Save the combined DataFrame as a CSV file
df_combined.to_csv(save_path, index=True, index_label="id")
print("Finished.")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/train_ts2vec_multi.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import torch
import numpy as np
import argparse
import time
import datetime
from ts2vec import TS2Vec
from other_anomaly_baselines.tasks.anomaly_detection import eval_anomaly_detection, eval_anomaly_detection_coldstart
import datautils
from utils import init_dl_program, name_with_datetime, pkl_save, data_dropout
def save_checkpoint_callback(
save_every=1,
unit='epoch'
):
assert unit in ('epoch', 'iter')
def callback(model, loss):
n = model.n_epochs if unit == 'epoch' else model.n_iters
if n % save_every == 0:
model.save(f'{run_dir}/model_{n}.pkl')
return callback
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', default='UCR',
help='The dataset name') ## SMD, MSL, SMAP, PSM, SWAT, NIPS_TS_Swan, UCR, NIPS_TS_Water
parser.add_argument('--is_multi', default=True, help='Whether to use the multivariate data loader')
parser.add_argument('--datapath', default='./datasets/', help='The root directory of the datasets')
parser.add_argument('--index', type=int, default=203, help='The sub-dataset index (used for UCR)') ## [79, 108, 187, 203]
parser.add_argument('--run_name', default='ts2Vec',
help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
parser.add_argument('--loader', type=str, default='anomaly',
help='The data loader used to load the experimental data. This can be set to anomaly or anomaly_coldstart')
parser.add_argument('--gpu', type=int, default=0,
help='The gpu no. used for training and inference (defaults to 0)')
parser.add_argument('--batch_size', type=int, default=8, help='The batch size (defaults to 8)')
parser.add_argument('--lr', type=float, default=0.001, help='The learning rate (defaults to 0.001)')
parser.add_argument('--repr-dims', type=int, default=320, help='The representation dimension (defaults to 320)')
parser.add_argument('--max-train-length', type=int, default=3000,
help='Sequences longer than this value are cropped into sub-sequences, each shorter than this value (defaults to 3000)')
parser.add_argument('--iters', type=int, default=None, help='The number of iterations')
parser.add_argument('--epochs', type=int, default=None, help='The number of epochs')
parser.add_argument('--save-every', type=int, default=None,
help='Save the checkpoint every iterations/epochs')
parser.add_argument('--seed', type=int, default=None, help='The random seed')
parser.add_argument('--max-threads', type=int, default=None,
help='The maximum allowed number of threads used by this process')
parser.add_argument('--eval', default='True',
help='Whether to perform evaluation after training') ## action="store_true"
parser.add_argument('--irregular', type=float, default=0, help='The ratio of missing observations (defaults to 0)')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/tsm_ptms_anomaly_detection/result/')
parser.add_argument('--save_csv_name', type=str, default='ts2vec_ucr_0727.csv')
args = parser.parse_args()
print("Dataset:", args.dataset)
print("Arguments:", str(args))
device = init_dl_program(args.gpu, seed=args.seed, max_threads=args.max_threads)
print('Loading data... ', end='')
if args.loader == 'anomaly':
task_type = 'anomaly_detection'
if args.is_multi:
from datasets.data_loader import get_loader_segment
data_path = args.datapath + args.dataset + '/'
print("data_path = ", data_path)
_, train_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100,
mode='train',
dataset=args.dataset)
# val_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100, mode='val',
# dataset=args.dataset)
# test_data_loader = get_loader_segment(args.index, data_path, args.batch_size, win_size=100, step=100, mode='test',
# dataset=args.dataset)
all_train_data = train_data_loader.train
all_train_labels = None
all_train_timestamps = None
all_test_data = train_data_loader.test
all_test_labels = train_data_loader.test_labels
all_test_timestamps = None
delay = 5
print("all_train_data test_data, test_labels.shape = ", all_train_data.shape, all_test_data.shape,
all_test_labels.shape)
train_data = np.expand_dims(all_train_data, axis=0)
print("train_data.shape = ", train_data.shape)
print("Read Success!!!")
else:
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(
args.dataset)
train_data = datautils.gen_ano_train_data(all_train_data)
elif args.loader == 'anomaly_coldstart':
task_type = 'anomaly_detection_coldstart'
all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(
args.dataset)
train_data, _, _, _ = datautils.load_UCR('FordA')
else:
raise ValueError(f"Unknown loader {args.loader}.")
if args.irregular > 0:
raise ValueError(f"Task type {task_type} is not supported when irregular>0.")
print('done')
config = dict(
batch_size=args.batch_size,
lr=args.lr,
output_dims=args.repr_dims,
max_train_length=args.max_train_length
)
if args.save_every is not None:
unit = 'epoch' if args.epochs is not None else 'iter'
config[f'after_{unit}_callback'] = save_checkpoint_callback(args.save_every, unit)
run_dir = 'training/' + args.dataset + '__' + name_with_datetime(args.run_name)
os.makedirs(run_dir, exist_ok=True)
t = time.time()
model = TS2Vec(
input_dims=train_data.shape[-1],
device=device,
**config
)
loss_log = model.fit(
train_data,
n_epochs=args.epochs,
n_iters=args.iters,
verbose=True
)
model.save(f'{run_dir}/model.pkl')
t = time.time() - t
print(f"\nTraining time: {datetime.timedelta(seconds=t)}")
print("Training time(seconds): ", t)
if args.eval:
if task_type == 'anomaly_detection':
out, eval_res = eval_anomaly_detection(model, all_train_data, all_train_labels, all_train_timestamps,
all_test_data,
all_test_labels, all_test_timestamps, delay, is_multi=args.is_multi, ucr_index=args.index)
elif task_type == 'anomaly_detection_coldstart':
out, eval_res = eval_anomaly_detection_coldstart(model, all_train_data, all_train_labels,
all_train_timestamps, all_test_data, all_test_labels,
all_test_timestamps, delay)
else:
assert False
pkl_save(f'{run_dir}/out.pkl', out)
pkl_save(f'{run_dir}/eval_res.pkl', eval_res)
print('Evaluation result:', eval_res)
eval_res['dataset'] = args.dataset + str(args.index)
import pandas as pd
# Convert the result dict to a one-row DataFrame
df_new = pd.DataFrame([eval_res])
# Target CSV path
save_path = args.save_dir + args.save_csv_name
# If the file already exists, append the new results to the existing table
if os.path.exists(save_path):
df_existing = pd.read_csv(save_path, index_col=0)
df_combined = pd.concat([df_existing, df_new], ignore_index=True)
else:
# Otherwise start a new table
df_combined = df_new
# Save the combined DataFrame as CSV
df_combined.to_csv(save_path, index=True, index_label="id")
print("Finished.")
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/ts2vec.py
================================================
import torch
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
from models import TSEncoder
from models.losses import hierarchical_contrastive_loss
from utils import take_per_row, split_with_nan, centerize_vary_length_series, torch_pad_nan
import math
class TS2Vec:
'''The TS2Vec model'''
def __init__(
self,
input_dims,
output_dims=320,
hidden_dims=64,
depth=10,
device='cuda',
lr=0.001,
batch_size=16,
max_train_length=None,
temporal_unit=0,
after_iter_callback=None,
after_epoch_callback=None
):
''' Initialize a TS2Vec model.
Args:
input_dims (int): The input dimension. For a univariate time series, this should be set to 1.
output_dims (int): The representation dimension.
hidden_dims (int): The hidden dimension of the encoder.
depth (int): The number of hidden residual blocks in the encoder.
device (str): The device used for training and inference.
lr (float): The learning rate.
batch_size (int): The batch size.
max_train_length (Union[int, NoneType]): The maximum allowed sequence length for training. Sequences longer than this value are cropped into subsequences, each no longer than this value.
temporal_unit (int): The minimum unit to perform temporal contrast. When training on a very long sequence, this param helps to reduce the cost of time and memory.
after_iter_callback (Union[Callable, NoneType]): A callback function that would be called after each iteration.
after_epoch_callback (Union[Callable, NoneType]): A callback function that would be called after each epoch.
'''
super().__init__()
self.device = device
self.lr = lr
self.batch_size = batch_size
self.max_train_length = max_train_length
self.temporal_unit = temporal_unit
self._net = TSEncoder(input_dims=input_dims, output_dims=output_dims, hidden_dims=hidden_dims, depth=depth).to(self.device)
self.net = torch.optim.swa_utils.AveragedModel(self._net)
self.net.update_parameters(self._net)
self.after_iter_callback = after_iter_callback
self.after_epoch_callback = after_epoch_callback
self.n_epochs = 0
self.n_iters = 0
def fit(self, train_data, n_epochs=None, n_iters=None, verbose=False):
''' Training the TS2Vec model.
Args:
train_data (numpy.ndarray): The training data. It should have a shape of (n_instance, n_timestamps, n_features). All missing data should be set to NaN.
n_epochs (Union[int, NoneType]): The number of epochs. Training stops once this many epochs have run.
n_iters (Union[int, NoneType]): The number of iterations. Training stops once this many iterations have run. If neither n_epochs nor n_iters is specified, n_iters defaults to 200 for a dataset with size <= 100000, and 600 otherwise.
verbose (bool): Whether to print the training loss after each epoch.
Returns:
loss_log: a list containing the training losses on each epoch.
'''
assert train_data.ndim == 3
if n_iters is None and n_epochs is None:
n_iters = 200 if train_data.size <= 100000 else 600 # default param for n_iters
if self.max_train_length is not None:
sections = train_data.shape[1] // self.max_train_length
if sections >= 2:
train_data = np.concatenate(split_with_nan(train_data, sections, axis=1), axis=0)
# train_data: (n_instance*sections, max_train_length, n_features)
temporal_missing = np.isnan(train_data).all(axis=-1).any(axis=0) # (max_train_length)
if temporal_missing[0] or temporal_missing[-1]: # whether the head or tail contains NaN
train_data = centerize_vary_length_series(train_data)
train_data = train_data[~np.isnan(train_data).all(axis=2).all(axis=1)]
# drop sequences of shape (max_train_length, n_features) that contain only NaN
train_dataset = TensorDataset(torch.from_numpy(train_data).to(torch.float))
train_loader = DataLoader(train_dataset, batch_size=min(self.batch_size, len(train_dataset)), shuffle=True, drop_last=True)
optimizer = torch.optim.AdamW(self._net.parameters(), lr=self.lr)
loss_log = []
while True:
if n_epochs is not None and self.n_epochs >= n_epochs:
break
cum_loss = 0
n_epoch_iters = 0
interrupted = False
for batch in train_loader:
if n_iters is not None and self.n_iters >= n_iters:
interrupted = True
break
x = batch[0] #(batch_size, n_timestamps, n_features)
# print("#####################")
# raise Exception('my personal exception!')
if self.max_train_length is not None and x.size(1) > self.max_train_length:
window_offset = np.random.randint(x.size(1) - self.max_train_length + 1)
x = x[:, window_offset : window_offset + self.max_train_length]
x = x.to(self.device)
ts_l = x.size(1)
crop_l = np.random.randint(low=2 ** (self.temporal_unit + 1), high=ts_l+1)
crop_left = np.random.randint(ts_l - crop_l + 1)
crop_right = crop_left + crop_l
crop_eleft = np.random.randint(crop_left + 1)
crop_eright = np.random.randint(low=crop_right, high=ts_l + 1)
crop_offset = np.random.randint(low=-crop_eleft, high=ts_l - crop_eright + 1, size=x.size(0))
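# The two random crops share the overlap [crop_left, crop_right); below,
# out1 keeps the last crop_l steps of the left-context view and out2 the
# first crop_l steps of the right-context view, so the loss contrasts the
# same timestamps encoded under two different contexts.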
optimizer.zero_grad()
out1 = self._net(take_per_row(x, crop_offset + crop_eleft, crop_right - crop_eleft))
out1 = out1[:, -crop_l:]
out2 = self._net(take_per_row(x, crop_offset + crop_left, crop_eright - crop_left))
out2 = out2[:, :crop_l]
loss = hierarchical_contrastive_loss(
out1,
out2,
temporal_unit=self.temporal_unit
)
loss.backward()
optimizer.step()
self.net.update_parameters(self._net)
cum_loss += loss.item()
n_epoch_iters += 1
self.n_iters += 1
if self.after_iter_callback is not None:
self.after_iter_callback(self, loss.item())
if interrupted:
break
cum_loss /= n_epoch_iters
loss_log.append(cum_loss)
if verbose:
print(f"Epoch #{self.n_epochs}: loss={cum_loss}")
self.n_epochs += 1
if self.after_epoch_callback is not None:
self.after_epoch_callback(self, cum_loss)
return loss_log
def _eval_with_pooling(self, x, mask=None, slicing=None, encoding_window=None):
out = self.net(x.to(self.device, non_blocking=True), mask)
if encoding_window == 'full_series':
if slicing is not None:
out = out[:, slicing]
out = F.max_pool1d(
out.transpose(1, 2),
kernel_size = out.size(1),
).transpose(1, 2)
elif isinstance(encoding_window, int):
out = F.max_pool1d(
out.transpose(1, 2),
kernel_size = encoding_window,
stride = 1,
padding = encoding_window // 2
).transpose(1, 2)
if encoding_window % 2 == 0:
out = out[:, :-1]
if slicing is not None:
out = out[:, slicing]
elif encoding_window == 'multiscale':
p = 0
reprs = []
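# Multiscale pooling: max-pool with kernel sizes 2^(p+1)+1 (3, 5, 9, ...)
# until the kernel would span the whole sequence, then concatenate all
# scales along the feature dimension.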
while (1 << p) + 1 < out.size(1):
t_out = F.max_pool1d(
out.transpose(1, 2),
kernel_size = (1 << (p + 1)) + 1,
stride = 1,
padding = 1 << p
).transpose(1, 2)
if slicing is not None:
t_out = t_out[:, slicing]
reprs.append(t_out)
p += 1
out = torch.cat(reprs, dim=-1)
else:
if slicing is not None:
out = out[:, slicing]
return out.cpu()
def encode(self, data, mask=None, encoding_window=None, casual=False, sliding_length=None, sliding_padding=0, batch_size=None):
''' Compute representations using the model.
Args:
data (numpy.ndarray): This should have a shape of (n_instance, n_timestamps, n_features). All missing data should be set to NaN.
mask (str): The mask used by encoder can be specified with this parameter. This can be set to 'binomial', 'continuous', 'all_true', 'all_false' or 'mask_last'.
encoding_window (Union[str, int]): When this param is specified, the computed representation would be max pooled over this window. This can be set to 'full_series', 'multiscale' or an integer specifying the pooling kernel size.
casual (bool): When this param is set to True, future information is not encoded into the representation of each timestamp (i.e., causal encoding).
sliding_length (Union[int, NoneType]): The length of the sliding window. When this param is specified, sliding inference is applied on the time series.
sliding_padding (int): This param specifies the contextual data length used for inference in each sliding window.
batch_size (Union[int, NoneType]): The batch size used for inference. If not specified, this would be the same batch size as training.
Returns:
repr: The representations for data.
'''
assert self.net is not None, 'please train or load a net first'
assert data.ndim == 3
print("data.shape = ", data.shape)
if batch_size is None:
batch_size = self.batch_size
n_samples, ts_l, _ = data.shape
org_training = self.net.training
self.net.eval()
dataset = TensorDataset(torch.from_numpy(data).to(torch.float))
loader = DataLoader(dataset, batch_size=batch_size)
with torch.no_grad():
output = []
for batch in loader:
x = batch[0]
if sliding_length is not None:
reprs = []
if n_samples < batch_size:
calc_buffer = []
calc_buffer_l = 0
for i in range(0, ts_l, sliding_length):
l = i - sliding_padding
r = i + sliding_length + (sliding_padding if not casual else 0)
x_sliding = torch_pad_nan(
x[:, max(l, 0) : min(r, ts_l)],
left=-l if l<0 else 0,
right=r-ts_l if r>ts_l else 0,
dim=1
)
if n_samples < batch_size:
if calc_buffer_l + n_samples > batch_size:
out = self._eval_with_pooling(
torch.cat(calc_buffer, dim=0),
mask,
slicing=slice(sliding_padding, sliding_padding+sliding_length),
encoding_window=encoding_window
)
reprs += torch.split(out, n_samples)
calc_buffer = []
calc_buffer_l = 0
calc_buffer.append(x_sliding)
calc_buffer_l += n_samples
else:
out = self._eval_with_pooling(
x_sliding,
mask,
slicing=slice(sliding_padding, sliding_padding+sliding_length),
encoding_window=encoding_window
)
reprs.append(out)
if n_samples < batch_size:
if calc_buffer_l > 0:
out = self._eval_with_pooling(
torch.cat(calc_buffer, dim=0),
mask,
slicing=slice(sliding_padding, sliding_padding+sliding_length),
encoding_window=encoding_window
)
reprs += torch.split(out, n_samples)
calc_buffer = []
calc_buffer_l = 0
out = torch.cat(reprs, dim=1)
if encoding_window == 'full_series':
out = F.max_pool1d(
out.transpose(1, 2).contiguous(),
kernel_size = out.size(1),
).squeeze(1)
else:
out = self._eval_with_pooling(x, mask, encoding_window=encoding_window)
if encoding_window == 'full_series':
out = out.squeeze(1)
output.append(out)
output = torch.cat(output, dim=0)
self.net.train(org_training)
return output.numpy()
def save(self, fn):
''' Save the model to a file.
Args:
fn (str): filename.
'''
torch.save(self.net.state_dict(), fn)
def load(self, fn):
''' Load the model from a file.
Args:
fn (str): filename.
'''
state_dict = torch.load(fn, map_location=self.device)
self.net.load_state_dict(state_dict)
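# Minimal usage sketch (illustrative; shapes and argument values assumed):
# model = TS2Vec(input_dims=1, device='cuda')
# loss_log = model.fit(train_data, n_iters=200, verbose=True) # train_data: (N, T, 1)
# reprs = model.encode(test_data, encoding_window='full_series') # (N, output_dims)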
================================================
FILE: ts_anomaly_detection_methods/other_anomaly_baselines/utils.py
================================================
import os
import numpy as np
import pickle
import torch
import random
from datetime import datetime
def pkl_save(name, var):
with open(name, 'wb') as f:
pickle.dump(var, f)
def pkl_load(name):
with open(name, 'rb') as f:
return pickle.load(f)
def torch_pad_nan(arr, left=0, right=0, dim=0):
if left > 0:
padshape = list(arr.shape)
padshape[dim] = left
arr = torch.cat((torch.full(padshape, np.nan), arr), dim=dim)
if right > 0:
padshape = list(arr.shape)
padshape[dim] = right
arr = torch.cat((arr, torch.full(padshape, np.nan)), dim=dim)
return arr
### pad with 'nan' so the sequence reaches the target_length
# both_side=True: padding at both the head and the tail with half pad_size
# both_side=False: padding at the tail with whole pad_size
def pad_nan_to_target(array, target_length, axis=0, both_side=False):
assert array.dtype in [np.float16, np.float32, np.float64]
pad_size = target_length - array.shape[axis]
if pad_size <= 0:
return array
npad = [(0, 0)] * array.ndim
if both_side:
npad[axis] = (pad_size // 2, pad_size - pad_size//2)
else:
npad[axis] = (0, pad_size)
return np.pad(array, pad_width=npad, mode='constant', constant_values=np.nan)
### split the sequence into `sections` subsequences,
### padding each at the tail so all have the same length (max_train_length)
def split_with_nan(x, sections, axis=0):
assert x.dtype in [np.float16, np.float32, np.float64]
arrs = np.array_split(x, sections, axis=axis)
target_length = arrs[0].shape[axis]
for i in range(len(arrs)):
arrs[i] = pad_nan_to_target(arrs[i], target_length, axis=axis)
return arrs
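# take_per_row(A, indx, num_elem): for each row i, take the window
# A[i, indx[i] : indx[i] + num_elem]; e.g. indx=[0, 2], num_elem=3 selects
# A[0, 0:3] and A[1, 2:5].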
def take_per_row(A, indx, num_elem):
all_indx = indx[:,None] + np.arange(num_elem)
return A[torch.arange(all_indx.shape[0])[:,None], all_indx]
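# Shift each series so that its valid (non-NaN) segment is centered:
# the per-row leading/trailing NaN runs are counted and each row is rolled
# by half of their difference.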
def centerize_vary_length_series(x):
prefix_zeros = np.argmax(~np.isnan(x).all(axis=-1), axis=1)
suffix_zeros = np.argmax(~np.isnan(x[:, ::-1]).all(axis=-1), axis=1)
offset = (prefix_zeros + suffix_zeros) // 2 - prefix_zeros
rows, column_indices = np.ogrid[:x.shape[0], :x.shape[1]]
offset[offset < 0] += x.shape[1]
column_indices = column_indices - offset[:, np.newaxis]
return x[rows, column_indices]
def data_dropout(arr, p):
B, T = arr.shape[0], arr.shape[1]
mask = np.full(B*T, False, dtype=bool) # np.bool was removed in NumPy >= 1.24
ele_sel = np.random.choice(
B*T,
size=int(B*T*p),
replace=False
)
mask[ele_sel] = True
res = arr.copy()
res[mask.reshape(B, T)] = np.nan
return res
def name_with_datetime(prefix='default'):
now = datetime.now()
return prefix + '_' + now.strftime("%Y%m%d_%H%M%S")
def init_dl_program(
device_name,
seed=None,
use_cudnn=True,
deterministic=False,
benchmark=False,
use_tf32=False,
max_threads=None
):
import torch
if max_threads is not None:
torch.set_num_threads(max_threads) # intraop
if torch.get_num_interop_threads() != max_threads:
torch.set_num_interop_threads(max_threads) # interop
try:
import mkl
except:
pass
else:
mkl.set_num_threads(max_threads)
if seed is not None:
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if isinstance(device_name, (str, int)):
device_name = [device_name]
devices = []
for t in reversed(device_name):
t_device = torch.device(t)
devices.append(t_device)
if t_device.type == 'cuda':
assert torch.cuda.is_available()
torch.cuda.set_device(t_device)
if seed is not None:
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
devices.reverse()
torch.backends.cudnn.enabled = use_cudnn
torch.backends.cudnn.deterministic = deterministic
torch.backends.cudnn.benchmark = benchmark
if hasattr(torch.backends.cudnn, 'allow_tf32'):
torch.backends.cudnn.allow_tf32 = use_tf32
torch.backends.cuda.matmul.allow_tf32 = use_tf32
return devices if len(devices) > 1 else devices[0]
def split_N_pad(series, window_size):
assert len(series.shape) == 2
ret = []
l = series.shape[0]
for i in range(l // window_size):
ret.append(series[i * window_size:(i + 1) * window_size, :])
left = l - l // window_size * window_size
# pad the last (incomplete) window with zeros
if left != 0:
p = np.zeros([window_size, series.shape[1]])
p[:left, :] = series[-left:, :]
ret.append(p)
return ret
'''for AT (Anomaly Transformer)'''
def data_slice(data, window_size):
'''
data : [size, length, dim]
'''
assert len(data.shape) == 3
ret = []
for i in range(data.shape[0]):
series = data[i]
ret.extend(split_N_pad(series, window_size))
return np.array(ret)
================================================
FILE: ts_classification_methods/.gitignore
================================================
*.log
dilated_result
fcn_result
fcn_result_v2
result_v2
rnn_result
__pycache__
data/__pycache__
logs_v2
logs_v3
logs
result_v3
cache
/.idea/
/test/test_env.py
/test/test_path.py
/test/train_test.py
/ts2vec_cls/train_nonlin.py
/tloss_cls/*.csv
/selftime_cls/*.csv
/tst_cls/*.csv
/tst_cls/results
/scripts/generator_uea.py
/test/fcn_uea.py
/test/dilated_uea.py
/visualize_test.py
/test/Wine/
/test/Wine/test_dir
/scripts/ex1_trasfer_finetune.sh
/test/train_test2.py
/ts2vec_cls/train_tsm_test.py
/test/train_test3.py
/tstcc_cls/semi_main_ucr.py
/tfc_cls/new_dataset_test.py
/tfc_cls/result_transfer/readme
/result_tsm_lin/test_readme
/result_tsm_lin/
/tfc_cls/
================================================
FILE: ts_classification_methods/README.md
================================================
# A Survey on Time-Series Pre-Trained Models
This is the training code for our paper *"A Survey on Time-Series Pre-Trained Models"*
## Pre-Trained Models on Time Series Classification
### Usage (Transfer Learning)
1. To pre-train a model on your own dataset, run
```bash
python train.py --dataroot [your UCR datasets directory] --task [type of pre-training task: classification or reconstruction] --dataset [name of the dataset you want to pretrain on] --backbone [fcn or dilated] --mode pretrain ...
```
2. To finetune (classification) the model on a dataset, run
```bash
python train.py --dataroot [your UCR datasets directory] --dataset [name of the dataset you want to finetune on] --source_dataset [the dataset you pretrained on] --save_dir [the directory to save the pretrained weights] --mode finetune ...
```
To see all available options, run
```bash
python train.py -h
```
For detailed options and examples, please refer to ```scripts/transfer_pretrain_finetune.sh```
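For instance, a typical pretrain-then-finetune round (the paths and dataset names below are illustrative) could look like:
```bash
# pre-train on GunPoint with an FCN backbone
python train.py --dataroot ./UCRArchive_2018 --task classification --dataset GunPoint --backbone fcn --mode pretrain
# finetune the pretrained weights on Wine
python train.py --dataroot ./UCRArchive_2018 --dataset Wine --source_dataset GunPoint --save_dir ./ckpts --backbone fcn --mode finetune
```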
### Usage (Transformer and Contrastive Learning)
| ID | Method | Architecture | Year | Venue | Source Code |
|-----| ---- |------------------------------------|------|-------------------| ---- |
| 1 | [TS2Vec](https://www.aaai.org/AAAI22Papers/AAAI-8809.YueZ.pdf) | Contrastive Learning | 2022 | AAAI | [github-link](https://github.com/yuezhihan/ts2vec) |
| 2 | [TS-TCC](https://www.ijcai.org/proceedings/2021/0324.pdf) | Contrastive Learning & Transformer | 2021 | IJCAI | [github-link](https://github.com/emadeldeen24/TS-TCC) |
| 3 | [TST](https://dl.acm.org/doi/10.1145/3447548.3467401) | Transformer | 2021 | KDD | [github-link](https://github.com/gzerveas/mvts_transformer) |
| 4 | [Triplet-loss](https://papers.nips.cc/paper/2019/hash/53c6de78244e9f528eb3e1cda69699bb-Abstract.html) | Contrastive Learning | 2019 | NeurIPS | [github-link](https://github.com/White-Link/UnsupervisedScalableRepresentationLearningTimeSeries) |
| 5 | [SelfTime](https://openreview.net/pdf?id=qFQTP00Q0kp) | Contrastive Learning | 2021 | Submitted to ICLR | [github-link](https://github.com/haoyfan/SelfTime) |
| 6 | [TimesNet](https://openreview.net/pdf?id=ju_Uqw384Oq) | CNN | 2023 | ICLR | [github-link](https://github.com/thuml/TimesNet) |
| 7 | [PatchTST](https://openreview.net/pdf?id=Jbdc0vTOcol) | Transformer | 2023 | ICLR | [github-link](https://github.com/yuqinie98/PatchTST) |
| 8 | [GPT4TS](https://arxiv.org/abs/2302.11939) | GPT2 | 2023 | NeurIPS | [github-link](https://github.com/DAMO-DI-ML/NeurIPS2023-One-Fits-All) |
1. Pre-training and classification using **TS2Vec** model on a UCR dataset, run
```bash
python train_tsm.py --dataroot [your UCR datasets directory] --normalize_way single ...
```
For detailed options and examples, please refer to ```ts2vec_cls/scripts/ts2vec_tsm_single_norm.sh```
2. Pre-training and classification using **TS-TCC** model on a UCR dataset, run
```bash
python main_ucr.py --dataset [name of the ucr dataset] --device cuda:0 --save_csv_name tstcc_ucr_ --seed 42;
```
For detailed options and examples, please refer to ```tstcc_cls/scripts/fivefold_tstcc_ucr.sh```
3. Pre-training and classification using **TST** model on a UCR dataset, run
```bash
python src/main.py --dataset [dataset name] --data_dir [path of the dataset] --batch_size [batch size] --task pretrain_and_finetune --epochs [number of epochs]
```
To do classification task using Transformer encoder on a UCR dataset, run
```bash
python src/main.py --dataset [dataset name] --data_dir [path of the dataset] --batch_size [batch size] --task classification --epochs [number of epochs]
```
For detailed options and examples for training on the full UCR128 dataset, please refer to ```tst_cls/scripts/pretrain_finetune.sh``` and ```tst_cls/scripts/classification.sh``` or simply run
```bash
python src/main.py -h
```
4. Pre-training and classification using **Triplet-loss** model on a UCR dataset, run
```bash
python ucr.py --dataset [name of the ucr dataset] --path [your UCR datasets directory] --hyper [hyperparameters file path(./default_hyperparameters.json for default option)] --cuda
```
For detailed options and examples, please refer to ```tloss_cls/scripts/ucr.sh```
Pre-training and classification using **Triplet-loss** model on a UEA dataset, run
```bash
python uea.py --dataset [name of the uea dataset] --path [your UEA datasets directory] --hyper [hyperparameters file path(./default_hyperparameters.json for default option)] --cuda
```
For detailed options and examples, please refer to ```tloss_cls/scripts/uea.sh```
5. Pre-training and classification using **SelfTime** model on a UCR dataset, run
```bash
python -u train_ssl.py --dataset_name [dataset name] --model_name SelfTime --ucr_path [your UCR datasets directory] --random_seed 42
```
6. Pre-training and classification using **TimesNet** model on a UCR dataset, run
```bash
python -u main_timesnet_ucr.py --dataset_name [dataset name] --model_name SelfTime --ucr_path [your UCR datasets directory] --random_seed 42
```
7. Classification using **PatchTST** model on a UCR dataset, run
```bash
python -u main_patchtst_ucr.py --dataset_name [dataset name] --model_name SelfTime --ucr_path [your UCR datasets directory] --random_seed 42
```
8. Fine-tuning and classification using **GPT4TS** model on a UCR dataset, run
```bash
python -u main_gpt4ts_ucr.py --dataset_name [dataset name] --model_name SelfTime --ucr_path [your UCR datasets directory] --random_seed 42
```
For detailed options and examples, please refer to ```selftime_cls/scripts/ucr.sh```
### Usage (Visualization)
* To get the visualization of model's feature map, run
```bash
python visualize.py --dataroot [your dataset root] --dataset [dataset name] --backbone [encoder backbone] --graph [cam, heatmap or tsne]
```
* We provide weights for the Wine and GunPoint datasets for a quick start.
================================================
FILE: ts_classification_methods/data/__init__.py
================================================
from .preprocessing import *
================================================
FILE: ts_classification_methods/data/dataloader.py
================================================
import torch
import torch.utils.data as data
# This Dataset only loads a single fold out of the 5 folds
class UCRDataset(data.Dataset):
def __init__(self, dataset, target):
self.dataset = dataset
# self.dataset = np.expand_dims(self.dataset, 1)
if len(self.dataset.shape) == 2:
self.dataset = torch.unsqueeze(self.dataset, 1) # (num_size, 1, series_length)
self.target = target
def __getitem__(self, index):
return self.dataset[index], self.target[index]
def __len__(self):
return len(self.target)
class UEADataset(data.Dataset):
def __init__(self, dataset, target):
self.dataset = dataset.permute(0, 2, 1) # (num_size, num_dimensions, series_length)
self.target = target
def __getitem__(self, index):
return self.dataset[index], self.target[index]
def __len__(self):
return len(self.target)
if __name__ == '__main__':
pass
'''
train = pd.read_csv('/dev_data/zzj/hzy/datasets/UCR/Adiac/Adiac_TRAIN.tsv', sep='\t', header=None)
train_target = train.iloc[:, 0]
train_x = train.iloc[:, 1:]
print(train_x.to_numpy())
'''
================================================
FILE: ts_classification_methods/data/preprocessing.py
================================================
import os
import numpy as np
import pandas as pd
from scipy.io.arff import loadarff
from sklearn.model_selection import StratifiedKFold
from tslearn.preprocessing import TimeSeriesScalerMeanVariance
def load_data(dataroot, dataset):
train = pd.read_csv(os.path.join(dataroot, dataset, dataset + '_TRAIN.tsv'), sep='\t', header=None)
train_x = train.iloc[:, 1:]
train_target = train.iloc[:, 0]
test = pd.read_csv(os.path.join(dataroot, dataset, dataset + '_TEST.tsv'), sep='\t', header=None)
test_x = test.iloc[:, 1:]
test_target = test.iloc[:, 0]
sum_dataset = pd.concat([train_x, test_x]).to_numpy(dtype=np.float32)
# sum_dataset = sum_dataset.fillna(sum_dataset.mean()).to_numpy(dtype=np.float32)
sum_target = pd.concat([train_target, test_target]).to_numpy(dtype=np.float32)
# sum_target = sum_target.fillna(sum_target.mean()).to_numpy(dtype=np.float32)
num_classes = len(np.unique(sum_target))
return sum_dataset, sum_target, num_classes
def load_UEA(dataroot, dataset):
'''
scipy 1.3.0 or newer is required; older versions cannot load these arff files.
'''
train_data = loadarff(os.path.join(dataroot, dataset, dataset + '_TRAIN.arff'))[0]
test_data = loadarff(os.path.join(dataroot, dataset, dataset + '_TEST.arff'))[0]
def extract_data(data):
res_data = []
res_labels = []
for t_data, t_label in data:
t_data = np.array([d.tolist() for d in t_data])
t_label = t_label.decode("utf-8")
res_data.append(t_data)
res_labels.append(t_label)
return np.array(res_data).swapaxes(1, 2), np.array(res_labels)
train_X, train_y = extract_data(train_data)
test_X, test_y = extract_data(test_data)
labels = np.unique(train_y)
transform = {k: i for i, k in enumerate(labels)}
train_y = np.vectorize(transform.get)(train_y)
test_y = np.vectorize(transform.get)(test_y)
sum_dataset = np.concatenate((train_X, test_X), axis=0,
dtype=np.float32) # (num_size, series_length, num_dimensions)
sum_target = np.concatenate((train_y, test_y), axis=0, dtype=np.float32)
num_classes = len(np.unique(sum_target))
return sum_dataset, sum_target, num_classes
def transfer_labels(labels):
indices = np.unique(labels)
num_samples = labels.shape[0]
for i in range(num_samples):
new_label = np.argwhere(labels[i] == indices)[0][0]
labels[i] = new_label
return labels
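# k_fold: the 5-fold outer split holds out 20% as the test set each time; the
# remaining 80% is split 3:1 into train/val, giving roughly a 60/20/20
# train/val/test partition per fold.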
def k_fold(data, target):
skf = StratifiedKFold(5, shuffle=True)
# skf = StratifiedShuffleSplit(5)
train_sets = []
train_targets = []
val_sets = []
val_targets = []
test_sets = []
test_targets = []
for raw_index, test_index in skf.split(data, target):
raw_set = data[raw_index]
raw_target = target[raw_index]
test_sets.append(data[test_index])
test_targets.append(target[test_index])
train_index, val_index = next(StratifiedKFold(4, shuffle=True).split(raw_set, raw_target))
# train_index, val_index = next(StratifiedShuffleSplit(1).split(raw_set, raw_target))
train_sets.append(raw_set[train_index])
train_targets.append(raw_target[train_index])
val_sets.append(raw_set[val_index])
val_targets.append(raw_target[val_index])
return train_sets, train_targets, val_sets, val_targets, test_sets, test_targets
def normalize_per_series(data):
std_ = data.std(axis=1, keepdims=True)
std_[std_ == 0] = 1.0
return (data - data.mean(axis=1, keepdims=True)) / std_
def normalize_train_val_test(train_set, val_set, test_set):
mean = train_set.mean()
std = train_set.std()
return (train_set - mean) / std, (val_set - mean) / std, (test_set - mean) / std
def normalize_uea_set(data_set):
'''
The function is the same as normalize_per_series, but can be used for multiple variables.
'''
return TimeSeriesScalerMeanVariance().fit_transform(data_set)
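# Replace NaNs in all three splits with the per-column mean computed on the
# training set only (avoiding test-set leakage); all-NaN columns fall back
# to a small constant (1e-6).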
def fill_nan_value(train_set, val_set, test_set):
ind = np.where(np.isnan(train_set))
col_mean = np.nanmean(train_set, axis=0)
col_mean[np.isnan(col_mean)] = 1e-6
train_set[ind] = np.take(col_mean, ind[1])
ind_val = np.where(np.isnan(val_set))
val_set[ind_val] = np.take(col_mean, ind_val[1])
ind_test = np.where(np.isnan(test_set))
test_set[ind_test] = np.take(col_mean, ind_test[1])
return train_set, val_set, test_set
if __name__ == '__main__':
pass
================================================
FILE: ts_classification_methods/environment.yaml
================================================
name: from_transfer_to_transformer
channels:
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
dependencies:
- python=3.9.7
- numpy=1.21.2
- pytorch=1.10.2
- scikit-learn=1.0.2
- scipy=1.7.3
- pandas=1.4.1
- tslearn=0.5.2
================================================
FILE: ts_classification_methods/gpt4ts/__init__.py
================================================
================================================
FILE: ts_classification_methods/gpt4ts/gpt4ts_utils.py
================================================
import os
import torch.utils.data as data
import numpy as np
import pandas as pd
from scipy.io.arff import loadarff
from sklearn.model_selection import StratifiedKFold
from tslearn.preprocessing import TimeSeriesScalerMeanVariance
import random
import torch
import torch.nn as nn
def build_dataset(args):
sum_dataset, sum_target, num_classes = load_data(args.dataroot, args.dataset)
sum_target = transfer_labels(sum_target)
return sum_dataset, sum_target, num_classes
def load_data(dataroot, dataset):
train = pd.read_csv(os.path.join(dataroot, dataset, dataset + '_TRAIN.tsv'), sep='\t', header=None)
train_x = train.iloc[:, 1:]
train_target = train.iloc[:, 0]
test = pd.read_csv(os.path.join(dataroot, dataset, dataset + '_TEST.tsv'), sep='\t', header=None)
test_x = test.iloc[:, 1:]
test_target = test.iloc[:, 0]
sum_dataset = pd.concat([train_x, test_x]).to_numpy(dtype=np.float32)
sum_target = pd.concat([train_target, test_target]).to_numpy(dtype=np.float32)
num_classes = len(np.unique(sum_target))
return sum_dataset, sum_target, num_classes
def normalize_per_series(data):
std_ = data.std(axis=1, keepdims=True)
std_[std_ == 0] = 1.0
return (data - data.mean(axis=1, keepdims=True)) / std_
def load_UEA(dataroot, dataset):
'''
scipy 1.3.0 or newer is required; older versions cannot load these arff files.
'''
train_data = loadarff(os.path.join(dataroot, dataset, dataset + '_TRAIN.arff'))[0]
test_data = loadarff(os.path.join(dataroot, dataset, dataset + '_TEST.arff'))[0]
def extract_data(data_set):
res_data = []
res_labels = []
for t_data, t_label in data_set:
t_data = np.array([d.tolist() for d in t_data])
t_label = t_label.decode("utf-8")
res_data.append(t_data)
res_labels.append(t_label)
return np.array(res_data).swapaxes(1, 2), np.array(res_labels)
train_X, train_y = extract_data(train_data)
test_X, test_y = extract_data(test_data)
labels = np.unique(train_y)
transform = {k: i for i, k in enumerate(labels)}
train_y = np.vectorize(transform.get)(train_y)
test_y = np.vectorize(transform.get)(test_y)
sum_dataset = np.concatenate((train_X, test_X), axis=0,
dtype=np.float32) # (num_size, series_length, num_dimensions)
sum_target = np.concatenate((train_y, test_y), axis=0, dtype=np.float32)
num_classes = len(np.unique(sum_target))
return sum_dataset, sum_target, num_classes
def transfer_labels(labels):
indices = np.unique(labels)
num_samples = labels.shape[0]
for i in range(num_samples):
new_label = np.argwhere(labels[i] == indices)[0][0]
labels[i] = new_label
return labels
def k_fold(data_set, target):
skf = StratifiedKFold(5, shuffle=True)
# skf = StratifiedShuffleSplit(5)
train_sets = []
train_targets = []
val_sets = []
val_targets = []
test_sets = []
test_targets = []
for raw_index, test_index in skf.split(data_set, target):
raw_set = data_set[raw_index]
raw_target = target[raw_index]
test_sets.append(data_set[test_index])
test_targets.append(target[test_index])
train_index, val_index = next(StratifiedKFold(4, shuffle=True).split(raw_set, raw_target))
# train_index, val_index = next(StratifiedShuffleSplit(1).split(raw_set, raw_target))
train_sets.append(raw_set[train_index])
train_targets.append(raw_target[train_index])
val_sets.append(raw_set[val_index])
val_targets.append(raw_target[val_index])
return train_sets, train_targets, val_sets, val_targets, test_sets, test_targets
def normalize_uea_set(data_set):
'''
The function is the same as normalize_per_series, but can be used for multiple variables.
'''
return TimeSeriesScalerMeanVariance().fit_transform(data_set)
def fill_nan_value(train_set, val_set, test_set):
ind = np.where(np.isnan(train_set))
col_mean = np.nanmean(train_set, axis=0)
col_mean[np.isnan(col_mean)] = 1e-6
train_set[ind] = np.take(col_mean, ind[1])
ind_val = np.where(np.isnan(val_set))
val_set[ind_val] = np.take(col_mean, ind_val[1])
ind_test = np.where(np.isnan(test_set))
test_set[ind_test] = np.take(col_mean, ind_test[1])
return train_set, val_set, test_set
class UEADataset(data.Dataset):
def __init__(self, dataset, target):
self.dataset = dataset.permute(0, 2, 1) # (num_size, num_dimensions, series_length)
self.target = target
def __getitem__(self, index):
return self.dataset[index], self.target[index]
def __len__(self):
return len(self.target)
def save_cls_new_result(args, mean_accu, max_acc, min_acc, std_acc, train_time):
save_path = os.path.join(args.save_dir, '', args.save_csv_name + '_sup_cls_result.csv')
if os.path.exists(save_path):
result_form = pd.read_csv(save_path, index_col=0)
else:
result_form = pd.DataFrame(
columns=['dataset_name', 'mean_accu', 'max_acc', 'min_acc', 'std_acc', 'train_time'])
new_row = pd.DataFrame([
{'dataset_name': args.dataset, 'mean_accu': '%.4f' % mean_accu, 'max_acc': '%.4f' % max_acc,
'min_acc': '%.4f' % min_acc,
'std_acc': '%.4f' % std_acc,
'train_time': '%.4f' % train_time
}])
# DataFrame.append was removed in pandas 2.0; concat is the supported way
result_form = pd.concat([result_form, new_row], ignore_index=True)
result_form.to_csv(save_path, index=True, index_label="id")
def set_seed(args):
random.seed(args.random_seed)
np.random.seed(args.random_seed)
torch.manual_seed(args.random_seed)
torch.cuda.manual_seed(args.random_seed)
torch.cuda.manual_seed_all(args.random_seed)
def get_all_datasets(data_set, target):
return k_fold(data_set, target)
def cross_entropy():
loss = nn.CrossEntropyLoss()
return loss
def reconstruction_loss():
loss = nn.MSELoss()
return loss
def build_loss(args):
if args.loss == 'cross_entropy':
return cross_entropy()
elif args.loss == 'reconstruction':
return reconstruction_loss()
================================================
FILE: ts_classification_methods/gpt4ts/main_gpt4ts.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import argparse
import time
import numpy as np
import torch
from torch.utils.data import DataLoader
from gpt4ts.gpt4ts_utils import load_UEA, normalize_uea_set, UEADataset, save_cls_new_result, set_seed, fill_nan_value, get_all_datasets, build_loss
from gpt4ts.models.gpt4ts import gpt4ts
def evaluate_gpt4ts(val_loader, model, loss):
val_loss = 0
val_accu = 0
sum_len = 0
for data, target in val_loader:
'''
data, target = data.to(device), target.to(device)
target = target.to(torch.int64)
'''
with torch.no_grad():
val_pred = model(data)
val_loss += loss(val_pred, target).item()
val_accu += torch.sum(torch.argmax(val_pred.data, axis=1) == target)
sum_len += len(target)
return val_loss / sum_len, val_accu / sum_len
if __name__ == '__main__': ##
parser = argparse.ArgumentParser()
# Base setup
parser.add_argument('--backbone', type=str, default='fcn', help='encoder backbone, fcn')
parser.add_argument('--random_seed', type=int, default=42, help='shuffle seed')
# Dataset setup
parser.add_argument('--dataset', type=str, default='LSST',
help='dataset (in UEA)') # LSST Heartbeat Images
# parser.add_argument('--dataroot', type=str, default='../UCRArchive_2018', help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/dev_data/lz/time_series_pretrain/datasets/UCRArchive_2018',
# help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/SSD/lz/UCRArchive_2018', help='path of UCR folder')
parser.add_argument('--dataroot', type=str, default='/dev_data/lz/Multivariate2018_arff', help='path of UEA folder')
parser.add_argument('--num_classes', type=int, default=0, help='number of class')
parser.add_argument('--normalize_way', type=str, default='single', help='single or train_set')
parser.add_argument('--seq_len', type=int, default=46, help='seq_len')
parser.add_argument('--input_size', type=int, default=1, help='input_size')
parser.add_argument('--patch_size', type=int, default=8, help='patch_size')
parser.add_argument('--stride', type=int, default=8, help='stride')
# Semi training
parser.add_argument('--labeled_ratio', type=float, default='0.1', help='0.1, 0.2, 0.4')
# training setup
parser.add_argument('--loss', type=str, default='cross_entropy', help='loss function')
parser.add_argument('--optimizer', type=str, default='adam', help='optimizer')
parser.add_argument('--lr', type=float, default=0.001, help='learning rate')
parser.add_argument('--weight_decay', type=float, default=0.0, help='weight decay')
parser.add_argument('--batch_size', type=int, default=128, help='')
parser.add_argument('--epoch', type=int, default=100, help='training epoch')
parser.add_argument('--cuda', type=str, default='cuda:1')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/time_series_label_noise/result')
parser.add_argument('--save_csv_name', type=str, default='gpt4ts_uea_supervised_0712_')
# classifier setup
parser.add_argument('--classifier', type=str, default='linear', help='type of classifier(linear or nonlinear)')
parser.add_argument('--classifier_input', type=int, default=128, help='input dim of the classifiers')
args = parser.parse_args()
device = torch.device(args.cuda if torch.cuda.is_available() else "cpu")
set_seed(args)
# sum_dataset, sum_target, num_classes = build_dataset(args)
sum_dataset, sum_target, num_classes = load_UEA(args.dataroot, args.dataset)
# args.num_classes = num_classes
# args.seq_len = sum_dataset.shape[1]
args.num_classes = num_classes
args.seq_len = sum_dataset.shape[1]
args.input_size = sum_dataset.shape[2]
while sum_dataset.shape[0] * 0.6 < args.batch_size:
args.batch_size = args.batch_size // 2
print("args.batch_size = ", args.batch_size, ", sum_dataset.shape = ", sum_dataset.shape)
model = gpt4ts(max_seq_len=args.seq_len, num_classes=args.num_classes, var_len=args.input_size, patch_size=args.patch_size, stride=args.stride)
model = model.to(device)
# model, classifier = build_model(args)
# model, classifier = model.to(device), classifier.to(device)
loss = build_loss(args).to(device)
model_init_state = model.state_dict()
# classifier_init_state = classifier.state_dict()
if args.optimizer == 'adam':
optimizer = torch.optim.Adam([{'params': model.parameters()}],
lr=args.lr, weight_decay=args.weight_decay)
elif args.optimizer == 'sgd':
optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = get_all_datasets(
sum_dataset, sum_target)
losses = []
test_accuracies = []
train_time = 0.0
end_val_epochs = []
for i, train_dataset in enumerate(train_datasets):
t = time.time()
model.load_state_dict(model_init_state)
# classifier.load_state_dict(classifier_init_state)
print('{} fold start training and evaluate'.format(i))
train_target = train_targets[i]
val_dataset = val_datasets[i]
val_target = val_targets[i]
test_dataset = test_datasets[i]
test_target = test_targets[i]
train_dataset, val_dataset, test_dataset = fill_nan_value(train_dataset, val_dataset, test_dataset)
if args.normalize_way == 'single':
# TODO normalize per series
train_dataset = normalize_uea_set(train_dataset)
val_dataset = normalize_uea_set(val_dataset)
test_dataset = normalize_uea_set(test_dataset)
# else:
# train_dataset, val_dataset, test_dataset = normalize_train_val_test(train_dataset, val_dataset,
# test_dataset)
train_set = UEADataset(torch.from_numpy(train_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(train_target).type(torch.FloatTensor).to(device).to(torch.int64))
val_set = UEADataset(torch.from_numpy(val_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(val_target).type(torch.FloatTensor).to(device).to(torch.int64))
test_set = UEADataset(torch.from_numpy(test_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(test_target).type(torch.FloatTensor).to(device).to(torch.int64))
train_loader = DataLoader(train_set, batch_size=args.batch_size, num_workers=0, drop_last=True)
val_loader = DataLoader(val_set, batch_size=args.batch_size, num_workers=0)
test_loader = DataLoader(test_set, batch_size=args.batch_size, num_workers=0)
train_loss = []
train_accuracy = []
last_loss = float('inf')
stop_count = 0
increase_count = 0
num_steps = len(train_set) // args.batch_size # steps per epoch
min_val_loss = float('inf')
test_accuracy = 0
end_val_epoch = 0
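# Early stopping: stop_count counts consecutive epochs where the val loss
# plateaus (change <= 1e-4) and increase_count counts consecutive epochs
# where it rises; 50 epochs of either condition ends training.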
for epoch in range(args.epoch):
if stop_count == 50 or increase_count == 50:
print('model convergent at epoch {}, early stopping'.format(epoch))
break
epoch_train_loss = 0
epoch_train_acc = 0
num_iterations = 0
model.train()
train_embed = []
for x, y in train_loader:
optimizer.zero_grad()
pred = model(x)
step_loss = loss(pred, y)
# step_loss.backward(retain_graph=True)
step_loss.backward()
optimizer.step()
epoch_train_loss += step_loss.item()
epoch_train_acc += torch.sum(torch.argmax(pred.data, axis=1) == y) / len(y)
num_iterations += 1
epoch_train_loss /= num_steps
epoch_train_acc /= num_steps
# train_embed = np.concatenate(train_embed)
model.eval()
val_loss, val_accu = evaluate_gpt4ts(val_loader, model, loss)
if min_val_loss > val_loss:
min_val_loss = val_loss
end_val_epoch = epoch
test_loss, test_accuracy = evaluate_gpt4ts(test_loader, model, loss)
if abs(last_loss - val_loss) <= 1e-4:
stop_count += 1
else:
stop_count = 0
if val_loss > last_loss:
increase_count += 1
else:
increase_count = 0
last_loss = val_loss
if epoch % 50 == 0:
print(
"epoch : {}, train loss: {} , train accuracy : {}, \ntest_accuracy : {}".format(
epoch, epoch_train_loss, epoch_train_acc, test_accuracy))
test_accuracies.append(test_accuracy)
end_val_epochs.append(end_val_epoch)
t = time.time() - t
train_time += t
print('{} fold finish training'.format(i))
test_accuracies = torch.Tensor(test_accuracies)
print("Training end: mean_test_acc = ", round(torch.mean(test_accuracies).item(), 4),
"traning time (seconds) = ",
round(train_time, 4), ", seed = ", args.random_seed)
test_accuracies = test_accuracies.cpu().numpy()
save_cls_new_result(args, np.mean(test_accuracies), np.max(test_accuracies), np.min(test_accuracies),
np.std(test_accuracies), train_time)
print('Done!')
================================================
FILE: ts_classification_methods/gpt4ts/main_gpt4ts_ucr.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import argparse
import time
import numpy as np
import torch
from torch.utils.data import DataLoader
from gpt4ts.gpt4ts_utils import load_UEA, normalize_uea_set, UEADataset, save_cls_new_result, set_seed, fill_nan_value, get_all_datasets, build_loss, build_dataset, normalize_per_series
from gpt4ts.models.gpt4ts import gpt4ts
def evaluate_gpt4ts(val_loader, model, loss):
val_loss = 0
val_accu = 0
sum_len = 0
for data, target in val_loader:
'''
data, target = data.to(device), target.to(device)
target = target.to(torch.int64)
'''
with torch.no_grad():
val_pred = model(data)
val_loss += loss(val_pred, target).item()
val_accu += torch.sum(torch.argmax(val_pred.data, axis=1) == target)
sum_len += len(target)
return val_loss / sum_len, val_accu / sum_len
if __name__ == '__main__': ##
parser = argparse.ArgumentParser()
# Base setup
parser.add_argument('--backbone', type=str, default='fcn', help='encoder backbone, fcn')
parser.add_argument('--random_seed', type=int, default=42, help='shuffle seed')
# Dataset setup
parser.add_argument('--dataset', type=str, default='CBF',
help='dataset(in ucr)') # LSST Heartbeat Images
# parser.add_argument('--dataroot', type=str, default='../UCRArchive_2018', help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/dev_data/lz/time_series_pretrain/datasets/UCRArchive_2018',
# help='path of UCR folder')
parser.add_argument('--dataroot', type=str, default='/dev_data/lz/UCRArchive_2018', help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/SSD/lz/Multivariate2018_arff', help='path of UEA folder')
parser.add_argument('--num_classes', type=int, default=0, help='number of class')
parser.add_argument('--normalize_way', type=str, default='single', help='single or train_set')
parser.add_argument('--seq_len', type=int, default=46, help='seq_len')
parser.add_argument('--input_size', type=int, default=1, help='input_size')
parser.add_argument('--patch_size', type=int, default=8, help='patch_size')
parser.add_argument('--stride', type=int, default=8, help='stride')
# Semi training
parser.add_argument('--labeled_ratio', type=float, default='0.1', help='0.1, 0.2, 0.4')
# training setup
parser.add_argument('--loss', type=str, default='cross_entropy', help='loss function')
parser.add_argument('--optimizer', type=str, default='adam', help='optimizer')
parser.add_argument('--lr', type=float, default=0.001, help='learning rate')
parser.add_argument('--weight_decay', type=float, default=0.0, help='weight decay')
parser.add_argument('--batch_size', type=int, default=128, help='')
parser.add_argument('--epoch', type=int, default=100, help='training epoch')
parser.add_argument('--cuda', type=str, default='cuda:1')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/time_series_label_noise/result')
parser.add_argument('--save_csv_name', type=str, default='gpt4ts_ucr_supervised_0712_')
# classifier setup
parser.add_argument('--classifier', type=str, default='linear', help='type of classifier(linear or nonlinear)')
parser.add_argument('--classifier_input', type=int, default=128, help='input dim of the classifiers')
args = parser.parse_args()
device = torch.device(args.cuda if torch.cuda.is_available() else "cpu")
set_seed(args)
sum_dataset, sum_target, num_classes = build_dataset(args)
sum_dataset = sum_dataset[:, :, np.newaxis]
# sum_dataset, sum_target, num_classes = load_UEA(args.dataroot, args.dataset)
# args.num_classes = num_classes
# args.seq_len = sum_dataset.shape[1]
args.num_classes = num_classes
args.seq_len = sum_dataset.shape[1]
args.input_size = sum_dataset.shape[2]
while sum_dataset.shape[0] * 0.6 < args.batch_size:
args.batch_size = args.batch_size // 2
print("args.batch_size = ", args.batch_size, ", sum_dataset.shape = ", sum_dataset.shape)
model = gpt4ts(max_seq_len=args.seq_len, num_classes=args.num_classes, var_len=args.input_size, patch_size=args.patch_size, stride=args.stride)
model = model.to(device)
# model, classifier = build_model(args)
# model, classifier = model.to(device), classifier.to(device)
loss = build_loss(args).to(device)
model_init_state = model.state_dict()
# classifier_init_state = classifier.state_dict()
if args.optimizer == 'adam':
optimizer = torch.optim.Adam([{'params': model.parameters()}],
lr=args.lr, weight_decay=args.weight_decay)
elif args.optimizer == 'sgd':
optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = get_all_datasets(
sum_dataset, sum_target)
losses = []
test_accuracies = []
train_time = 0.0
end_val_epochs = []
for i, train_dataset in enumerate(train_datasets):
t = time.time()
model.load_state_dict(model_init_state)
# classifier.load_state_dict(classifier_init_state)
print('{} fold start training and evaluate'.format(i))
train_target = train_targets[i]
val_dataset = val_datasets[i]
val_target = val_targets[i]
test_dataset = test_datasets[i]
test_target = test_targets[i]
train_dataset, val_dataset, test_dataset = fill_nan_value(train_dataset, val_dataset, test_dataset)
if args.normalize_way == 'single':
# TODO normalize per series
train_dataset = normalize_per_series(train_dataset)
val_dataset = normalize_per_series(val_dataset)
test_dataset = normalize_per_series(test_dataset)
# else:
# train_dataset, val_dataset, test_dataset = normalize_train_val_test(train_dataset, val_dataset,
# test_dataset)
train_set = UEADataset(torch.from_numpy(train_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(train_target).type(torch.FloatTensor).to(device).to(torch.int64))
val_set = UEADataset(torch.from_numpy(val_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(val_target).type(torch.FloatTensor).to(device).to(torch.int64))
test_set = UEADataset(torch.from_numpy(test_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(test_target).type(torch.FloatTensor).to(device).to(torch.int64))
train_loader = DataLoader(train_set, batch_size=args.batch_size, num_workers=0, drop_last=True)
val_loader = DataLoader(val_set, batch_size=args.batch_size, num_workers=0)
test_loader = DataLoader(test_set, batch_size=args.batch_size, num_workers=0)
train_loss = []
train_accuracy = []
last_loss = float('inf')
stop_count = 0
increase_count = 0
num_steps = len(train_set) // args.batch_size # steps per epoch
min_val_loss = float('inf')
test_accuracy = 0
end_val_epoch = 0
for epoch in range(args.epoch):
if stop_count == 50 or increase_count == 50:
print('model convergent at epoch {}, early stopping'.format(epoch))
break
epoch_train_loss = 0
epoch_train_acc = 0
num_iterations = 0
model.train()
train_embed = []
for x, y in train_loader:
optimizer.zero_grad()
pred = model(x)
step_loss = loss(pred, y)
# step_loss.backward(retain_graph=True)
step_loss.backward()
optimizer.step()
epoch_train_loss += step_loss.item()
epoch_train_acc += torch.sum(torch.argmax(pred.data, axis=1) == y) / len(y)
num_iterations += 1
epoch_train_loss /= num_steps
epoch_train_acc /= num_steps
# train_embed = np.concatenate(train_embed)
model.eval()
val_loss, val_accu = evaluate_gpt4ts(val_loader, model, loss)
if min_val_loss > val_loss:
min_val_loss = val_loss
end_val_epoch = epoch
test_loss, test_accuracy = evaluate_gpt4ts(test_loader, model, loss)
if abs(last_loss - val_loss) <= 1e-4:
stop_count += 1
else:
stop_count = 0
if val_loss > last_loss:
increase_count += 1
else:
increase_count = 0
last_loss = val_loss
if epoch % 50 == 0:
print(
"epoch : {}, train loss: {} , train accuracy : {}, \ntest_accuracy : {}".format(
epoch, epoch_train_loss, epoch_train_acc, test_accuracy))
test_accuracies.append(test_accuracy)
end_val_epochs.append(end_val_epoch)
t = time.time() - t
train_time += t
print('{} fold finish training'.format(i))
test_accuracies = torch.Tensor(test_accuracies)
print("Training end: mean_test_acc = ", round(torch.mean(test_accuracies).item(), 4),
"traning time (seconds) = ",
round(train_time, 4), ", seed = ", args.random_seed)
test_accuracies = test_accuracies.cpu().numpy()
save_cls_new_result(args, np.mean(test_accuracies), np.max(test_accuracies), np.min(test_accuracies),
np.std(test_accuracies), train_time)
print('Done!')
================================================
FILE: ts_classification_methods/gpt4ts/models/__init__.py
================================================
================================================
FILE: ts_classification_methods/gpt4ts/models/embed.py
================================================
import torch
import torch.nn as nn
import math
class PositionalEmbedding(nn.Module):
def __init__(self, d_model, max_len=25000):
super(PositionalEmbedding, self).__init__()
# Compute the positional encodings once in log space.
pe = torch.zeros(max_len, d_model).float()
pe.requires_grad = False # fixed typo: was 'require_grad', which had no effect
position = torch.arange(0, max_len).float().unsqueeze(1)
div_term = (torch.arange(0, d_model, 2).float()
* -(math.log(10000.0) / d_model)).exp()
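# div_term = 10000^(-2i/d_model), giving the standard sinusoidal encoding:
# pe[pos, 2i] = sin(pos / 10000^(2i/d_model)), pe[pos, 2i+1] = cos(pos / 10000^(2i/d_model))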
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0)
self.register_buffer('pe', pe)
def forward(self, x):
return self.pe[:, :x.size(1)]
class TokenEmbedding(nn.Module):
def __init__(self, c_in, d_model):
super(TokenEmbedding, self).__init__()
padding = 1 if torch.__version__ >= '1.5.0' else 2
self.tokenConv = nn.Conv1d(in_channels=c_in, out_channels=d_model,
kernel_size=3, padding=padding, padding_mode='circular', bias=False)
for m in self.modules():
if isinstance(m, nn.Conv1d):
nn.init.kaiming_normal_(
m.weight, mode='fan_in', nonlinearity='leaky_relu')
def forward(self, x):
x = self.tokenConv(x.permute(0, 2, 1)).transpose(1, 2)
return x
class FixedEmbedding(nn.Module):
def __init__(self, c_in, d_model):
super(FixedEmbedding, self).__init__()
w = torch.zeros(c_in, d_model).float()
w.requires_grad = False # fixed typo: was 'require_grad', which had no effect
position = torch.arange(0, c_in).float().unsqueeze(1)
div_term = (torch.arange(0, d_model, 2).float()
* -(math.log(10000.0) / d_model)).exp()
w[:, 0::2] = torch.sin(position * div_term)
w[:, 1::2] = torch.cos(position * div_term)
self.emb = nn.Embedding(c_in, d_model)
self.emb.weight = nn.Parameter(w, requires_grad=False)
def forward(self, x):
return self.emb(x).detach()
class TemporalEmbedding(nn.Module):
def __init__(self, d_model, embed_type='fixed', freq='h'):
super(TemporalEmbedding, self).__init__()
minute_size = 4
hour_size = 24
weekday_size = 7
day_size = 32
month_size = 13
Embed = FixedEmbedding if embed_type == 'fixed' else nn.Embedding
if freq == 't':
self.minute_embed = Embed(minute_size, d_model)
self.hour_embed = Embed(hour_size, d_model)
self.weekday_embed = Embed(weekday_size, d_model)
self.day_embed = Embed(day_size, d_model)
self.month_embed = Embed(month_size, d_model)
def forward(self, x):
x = x.long()
minute_x = self.minute_embed(x[:, :, 4]) if hasattr(
self, 'minute_embed') else 0.
hour_x = self.hour_embed(x[:, :, 3])
weekday_x = self.weekday_embed(x[:, :, 2])
day_x = self.day_embed(x[:, :, 1])
month_x = self.month_embed(x[:, :, 0])
return hour_x + weekday_x + day_x + month_x + minute_x
class TimeFeatureEmbedding(nn.Module):
def __init__(self, d_model, embed_type='timeF', freq='h'):
super(TimeFeatureEmbedding, self).__init__()
freq_map = {'h': 4, 't': 5, 's': 6,
'm': 1, 'a': 1, 'w': 2, 'd': 3, 'b': 3}
d_inp = freq_map[freq]
self.embed = nn.Linear(d_inp, d_model, bias=False)
def forward(self, x):
return self.embed(x)
class DataEmbedding(nn.Module):
def __init__(self, c_in, d_model, embed_type='fixed', freq='h', dropout=0.1):
super(DataEmbedding, self).__init__()
self.value_embedding = TokenEmbedding(c_in=c_in, d_model=d_model)
self.position_embedding = PositionalEmbedding(d_model=d_model)
self.temporal_embedding = TemporalEmbedding(d_model=d_model, embed_type=embed_type,
freq=freq) if embed_type != 'timeF' else TimeFeatureEmbedding(
d_model=d_model, embed_type=embed_type, freq=freq)
self.dropout = nn.Dropout(p=dropout)
def forward(self, x, x_mark):
if x_mark is None:
x = self.value_embedding(x) + self.position_embedding(x)
else:
x = self.value_embedding(
x) + self.temporal_embedding(x_mark) + self.position_embedding(x)
return self.dropout(x)
class DataEmbedding_wo_pos(nn.Module):
def __init__(self, c_in, d_model, embed_type='fixed', freq='h', dropout=0.1):
super(DataEmbedding_wo_pos, self).__init__()
self.value_embedding = TokenEmbedding(c_in=c_in, d_model=d_model)
self.position_embedding = PositionalEmbedding(d_model=d_model)
self.temporal_embedding = TemporalEmbedding(d_model=d_model, embed_type=embed_type,
freq=freq) if embed_type != 'timeF' else TimeFeatureEmbedding(
d_model=d_model, embed_type=embed_type, freq=freq)
self.dropout = nn.Dropout(p=dropout)
def forward(self, x, x_mark):
if x_mark is None:
x = self.value_embedding(x)
else:
x = self.value_embedding(x) + self.temporal_embedding(x_mark)
return self.dropout(x)
class PatchEmbedding(nn.Module):
def __init__(self, d_model, patch_len, stride, dropout):
super(PatchEmbedding, self).__init__()
# Patching
self.patch_len = patch_len
self.stride = stride
self.padding_patch_layer = nn.ReplicationPad1d((0, stride))
# Backbone, Input encoding: projection of feature vectors onto a d-dim vector space
self.value_embedding = TokenEmbedding(patch_len, d_model)
# Positional embedding
self.position_embedding = PositionalEmbedding(d_model)
# Residual dropout
self.dropout = nn.Dropout(dropout)
def forward(self, x):
# do patching
n_vars = x.shape[1]
x = self.padding_patch_layer(x)
x = x.unfold(dimension=-1, size=self.patch_len, step=self.stride)
x = torch.reshape(x, (x.shape[0] * x.shape[1], x.shape[2], x.shape[3]))
# Input encoding
x = self.value_embedding(x) + self.position_embedding(x)
return self.dropout(x), n_vars
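# Minimal shape sketch (illustrative only; `_demo_patch_embedding` is hypothetical):
# with seq_len=24 and patch_len=stride=8, padding adds one stride, giving
# (24 + 8 - 8) // 8 + 1 = 4 patches per variable; variables fold into the batch dim.
def _demo_patch_embedding():
    emb = PatchEmbedding(d_model=16, patch_len=8, stride=8, dropout=0.0)
    x = torch.randn(2, 3, 24)  # (batch, n_vars, seq_len)
    out, n_vars = emb(x)
    assert out.shape == (2 * 3, 4, 16) and n_vars == 3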
class DataEmbedding_wo_time(nn.Module):
def __init__(self, c_in, d_model, embed_type='fixed', freq='h', dropout=0.1):
super(DataEmbedding_wo_time, self).__init__()
self.value_embedding = TokenEmbedding(c_in=c_in, d_model=d_model)
self.position_embedding = PositionalEmbedding(d_model=d_model)
self.dropout = nn.Dropout(p=dropout)
def forward(self, x):
x = self.value_embedding(x) + self.position_embedding(x)
return self.dropout(x)
================================================
FILE: ts_classification_methods/gpt4ts/models/gpt4ts.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers.models.gpt2.modeling_gpt2 import GPT2Model
from einops import rearrange
from gpt4ts.models.embed import DataEmbedding
class gpt4ts(nn.Module):
def __init__(self, max_seq_len, num_classes, var_len, d_model=768, patch_size=8, stride=8, dropout=0.1):
super(gpt4ts, self).__init__()
self.pred_len = 0
self.seq_len = max_seq_len
self.max_len = max_seq_len
self.patch_size = patch_size
self.stride = stride
self.gpt_layers = 6
self.feat_dim = var_len
self.num_classes = num_classes
self.d_model = d_model
self.patch_num = (self.seq_len - self.patch_size) // self.stride + 1
self.padding_patch_layer = nn.ReplicationPad1d((0, self.stride))
self.patch_num += 1
        self.enc_embedding = DataEmbedding(self.feat_dim * self.patch_size, d_model, dropout=dropout)  # keyword: the third positional slot is embed_type
self.gpt2 = GPT2Model.from_pretrained('/SSD/lz/gpt2', output_attentions=True, output_hidden_states=True)
self.gpt2.h = self.gpt2.h[:self.gpt_layers]
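        # NOTE: .apply(_init_weights) below re-initializes the truncated GPT-2,
        # overwriting the pretrained weights loaded above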
self.gpt2 = self.gpt2.apply(self.gpt2._init_weights)
for i, (name, param) in enumerate(self.gpt2.named_parameters()):
if 'ln' in name or 'wpe' in name:
param.requires_grad = True
# param.requires_grad = False
else:
param.requires_grad = False
device = torch.device('cuda:{}'.format(0))
self.gpt2.to(device=device)
self.act = F.gelu
self.dropout = nn.Dropout(0.1)
# self.ln_proj = nn.LayerNorm(config['d_model'] * self.patch_num)
self.ln_proj = nn.LayerNorm(d_model * self.patch_num)
self.out_layer = nn.Linear(d_model * self.patch_num, self.num_classes)
def forward(self, x_enc, x_mark_enc=None):
x_enc = x_enc.permute(0,2,1)
B, L, M = x_enc.shape
# print("x_enc.shape = ", x_enc.shape, B, L, M)
input_x = rearrange(x_enc, 'b l m -> b m l')
# print("input_x.shape = ", input_x.shape)
input_x = self.padding_patch_layer(input_x)
# print("patch1 input_x.shape = ", input_x.shape)
input_x = input_x.unfold(dimension=-1, size=self.patch_size, step=self.stride)
# print("patch2 input_x.shape = ", input_x.shape)
input_x = rearrange(input_x, 'b m n p -> b n (p m)')
# print("patch3 input_x.shape = ", input_x.shape)
outputs = self.enc_embedding(input_x, None)
# print("patch4 embd input_x.shape = ", outputs.shape)
outputs = self.gpt2(inputs_embeds=outputs).last_hidden_state
# print("patch5 gpt2 embd input_x.shape = ", outputs.shape)
outputs = self.act(outputs).reshape(B, -1)
outputs = self.ln_proj(outputs)
outputs = self.out_layer(outputs)
return outputs
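# Patch-count sanity check (hypothetical helper, not in the original file): after
# ReplicationPad1d((0, stride)) the padded length is L + stride, so unfold yields
# (L + stride - patch_size) // stride + 1 windows, which always equals
# (L - patch_size) // stride + 2 -- the value stored in self.patch_num above.
def _expected_patch_num(seq_len, patch_size, stride):
    return (seq_len + stride - patch_size) // stride + 1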
================================================
FILE: ts_classification_methods/gpt4ts/models/loss.py
================================================
import torch
import torch.nn as nn
from torch.nn import functional as F
def get_loss_module(config):
task = config['task']
if (task == "imputation") or (task == "transduction"):
return MaskedMSELoss(reduction='none') # outputs loss for each batch element
if task == "classification":
return NoFussCrossEntropyLoss(reduction='none') # outputs loss for each batch sample
if task == "regression":
return nn.MSELoss(reduction='none') # outputs loss for each batch sample
else:
raise ValueError("Loss module for task '{}' does not exist".format(task))
def l2_reg_loss(model):
"""Returns the squared L2 norm of output layer of given model"""
for name, param in model.named_parameters():
if name == 'output_layer.weight':
return torch.sum(torch.square(param))
class NoFussCrossEntropyLoss(nn.CrossEntropyLoss):
"""
    PyTorch's CrossEntropyLoss is fussy: 1) it accepts only Long (int64) targets, and 2) they must be 1D.
    This subclass casts and squeezes the targets to satisfy both requirements.
"""
def forward(self, inp, target):
return F.cross_entropy(inp, target.long().squeeze(), weight=self.weight,
ignore_index=self.ignore_index, reduction=self.reduction)
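# Minimal usage sketch (hypothetical `_demo_no_fuss_ce`, not part of the module):
# float and (N, 1)-shaped targets are accepted and coerced internally.
def _demo_no_fuss_ce():
    crit = NoFussCrossEntropyLoss()
    logits = torch.randn(4, 3)
    target = torch.tensor([[0.], [2.], [1.], [0.]])  # float, shape (4, 1)
    assert crit(logits, target).dim() == 0  # default reduction='mean' -> scalar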
class MaskedMSELoss(nn.Module):
""" Masked MSE Loss
"""
def __init__(self, reduction: str = 'mean'):
super().__init__()
self.reduction = reduction
self.mse_loss = nn.MSELoss(reduction=self.reduction)
def forward(self,
y_pred: torch.Tensor, y_true: torch.Tensor, mask: torch.BoolTensor) -> torch.Tensor:
"""Compute the loss between a target value and a prediction.
Args:
y_pred: Estimated values
y_true: Target values
            mask: boolean tensor, False where values should be ignored and True where they should be considered
        Returns:
            if reduction == 'none': (num_active,) loss for each active batch element, with gradient attached
            if reduction == 'mean': scalar mean loss over the batch, with gradient attached
"""
# for this particular loss, one may also elementwise multiply y_pred and y_true with the inverted mask
masked_pred = torch.masked_select(y_pred, mask)
masked_true = torch.masked_select(y_true, mask)
return self.mse_loss(masked_pred, masked_true)
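# Minimal usage sketch (hypothetical `_demo_masked_mse`, not part of the module):
def _demo_masked_mse():
    crit = MaskedMSELoss(reduction='mean')
    y_pred = torch.tensor([1.0, 2.0, 3.0])
    y_true = torch.tensor([1.0, 0.0, 5.0])
    mask = torch.tensor([True, False, True])  # the middle element is ignored
    assert crit(y_pred, y_true, mask).item() == 2.0  # (0**2 + 2**2) / 2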
================================================
FILE: ts_classification_methods/gpt4ts/scripts/generator_gpt4ts.py
================================================
uea_all = ['ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions', 'CharacterTrajectories',
'Cricket', 'DuckDuckGeese', 'EigenWorms', 'Epilepsy', 'EthanolConcentration', 'ERing',
'FaceDetection', 'FingerMovements', 'HandMovementDirection', 'Handwriting',
'Heartbeat', 'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PenDigits', 'PEMS-SF', 'PhonemeSpectra', 'RacketSports', 'SelfRegulationSCP1',
'SelfRegulationSCP2', 'SpokenArabicDigits', 'StandWalkJump', 'UWaveGestureLibrary']
ucr_dataset = ['ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY', 'AllGestureWiimoteZ', 'ArrowHead', 'BME',
'Beef',
'BeetleFly', 'BirdChicken', 'CBF', 'Car', 'Chinatown', 'ChlorineConcentration', 'CinCECGTorso', 'Coffee',
'Computers',
'CricketX', 'CricketY', 'CricketZ', 'Crop', 'DiatomSizeReduction', 'DistalPhalanxOutlineAgeGroup',
'DistalPhalanxOutlineCorrect', 'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame', 'DodgerLoopWeekend',
'ECG200', 'ECG5000', 'ECGFiveDays', 'EOGHorizontalSignal', 'EOGVerticalSignal', 'Earthquakes',
'ElectricDevices',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords', 'Fish', 'FordA', 'FordB',
'FreezerRegularTrain',
'FreezerSmallTrain', 'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3', 'GesturePebbleZ1',
'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan', 'GunPointMaleVersusFemale', 'GunPointOldVersusYoung',
'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate', 'InsectEPGRegularTrain',
'InsectEPGSmallTrain',
'InsectWingbeatSound', 'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2', 'Lightning7',
'Mallat', 'Meat',
'MedicalImages', 'MelbournePedestrian', 'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain', 'MoteStrain',
'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OSULeaf', 'OliveOil', 'PLAID', 'PhalangesOutlinesCorrect', 'Phoneme',
'PickupGestureWiimoteZ', 'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'Plane', 'PowerCons',
'ProximalPhalanxOutlineAgeGroup', 'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices',
'Rock', 'ScreenType', 'SemgHandGenderCh2', 'SemgHandMovementCh2', 'SemgHandSubjectCh2',
'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace', 'SonyAIBORobotSurface1',
'SonyAIBORobotSurface2', 'StarLightCurves', 'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG', 'TwoPatterns', 'UMD',
'UWaveGestureLibraryAll',
'UWaveGestureLibraryX', 'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine', 'WordSynonyms',
'Worms',
'WormsTwoClass', 'Yoga']
dataset_list = list(ucr_dataset)  # the UCR list above; swap in uea_all for the UEA archive
print("len = ", len(dataset_list))
# dataset_list = ['HouseTwenty']
code_main = 'main_gpt4ts_ucr' ### main_gpt4ts multi_hydra_uea_test multi_rocket_uea_test multi_hydra_ucr_test multi_rocket_ucr_test
================================================
FILE: ts_classification_methods/model/__init__.py
================================================
from .tsm_model import *
================================================
FILE: ts_classification_methods/model/loss.py
================================================
import torch.nn as nn
def cross_entropy():
loss = nn.CrossEntropyLoss()
return loss
def reconstruction_loss():
loss = nn.MSELoss()
return loss
================================================
FILE: ts_classification_methods/model/tsm_model.py
================================================
import torch
import torch.nn as nn
import torch.nn.utils as utils
# (B, C, T) -> (B, C, T-s)
class Chomp1d(nn.Module):
def __init__(self, chomp_size):
super(Chomp1d, self).__init__()
self.chomp_size = chomp_size
def forward(self, x):
return x[:, :, :-self.chomp_size]
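# Minimal shape sketch (hypothetical `_demo_chomp`, not part of the module):
# Chomp1d trims the trailing padding a causal convolution adds.
def _demo_chomp():
    x = torch.randn(1, 4, 10)  # (B, C, T)
    assert Chomp1d(chomp_size=2)(x).shape == (1, 4, 8)  # T - s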
class SqueezeChannels(nn.Module):
def __init__(self):
super(SqueezeChannels, self).__init__()
def forward(self, x):
return x.squeeze(2)
class FCN(nn.Module):
def __init__(self, num_classes, input_size=1):
super(FCN, self).__init__()
self.num_classes = num_classes
self.conv_block1 = nn.Sequential(
nn.Conv1d(in_channels=input_size, out_channels=128,
kernel_size=8, padding='same'),
nn.BatchNorm1d(128),
nn.ReLU()
)
self.conv_block2 = nn.Sequential(
nn.Conv1d(in_channels=128, out_channels=256,
kernel_size=5, padding='same'),
nn.BatchNorm1d(256),
nn.ReLU()
)
self.conv_block3 = nn.Sequential(
nn.Conv1d(in_channels=256, out_channels=128,
kernel_size=3, padding='same'),
nn.BatchNorm1d(128),
nn.ReLU()
)
self.network = nn.Sequential(
self.conv_block1,
self.conv_block2,
self.conv_block3,
nn.AdaptiveAvgPool1d(1),
SqueezeChannels(),
)
def forward(self, x, vis=False):
if vis:
with torch.no_grad():
vis_out = self.conv_block1(x)
vis_out = self.conv_block2(vis_out)
vis_out = self.conv_block3(vis_out)
return self.network(x), vis_out
return self.network(x)
class DilatedBlock(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, dilation, final=False):
super(DilatedBlock, self).__init__()
padding = (kernel_size - 1) * dilation
self.conv_block1 = nn.Sequential(
utils.weight_norm(nn.Conv1d(in_channels, out_channels, kernel_size,
padding=padding, dilation=dilation)),
Chomp1d(padding),
nn.LeakyReLU()
)
self.conv_block2 = nn.Sequential(
utils.weight_norm(nn.Conv1d(out_channels, out_channels, kernel_size,
padding=padding, dilation=dilation)),
Chomp1d(padding),
nn.LeakyReLU()
)
        # 1x1 convolution to match channel counts for the residual connection, if needed
self.upordownsample = torch.nn.Conv1d(
in_channels, out_channels, 1
) if in_channels != out_channels else None
self.relu = torch.nn.LeakyReLU() if final else None
def forward(self, x):
out = self.conv_block1(x)
out = self.conv_block2(out)
res = x if self.upordownsample is None else self.upordownsample(x)
if self.relu is None:
return out + res
else:
return self.relu(out + res)
class DilatedConvolution(nn.Module):
def __init__(self, in_channels, embedding_channels, out_channels, depth, reduced_size, kernel_size,
num_classes) -> None:
super(DilatedConvolution, self).__init__()
        self.layers = []
        # dilation size will be doubled at each step according to TLoss
        dilation_size = 1
        for i in range(depth):
            block_in_channels = in_channels if i == 0 else embedding_channels
            self.layers += [DilatedBlock(block_in_channels,
                                         embedding_channels, kernel_size, dilation_size)]
            dilation_size *= 2
        self.layers += [DilatedBlock(embedding_channels, reduced_size,
                                     kernel_size, dilation_size, final=True)]
        self.global_average_pool = nn.AdaptiveAvgPool1d(1)
        # Note: the dilated encoder uses global max pooling here (not average pooling);
        # layers are kept on self so the vis branch of forward can reuse them
        self.network = nn.Sequential(*self.layers,
                                     nn.AdaptiveMaxPool1d(1),
                                     SqueezeChannels(),
                                     nn.Linear(reduced_size, out_channels),
                                     )
def forward(self, x, vis=False):
if vis:
with torch.no_grad():
return self.network(x), nn.Sequential(*self.layers)(x)
return self.network(x)
class DilatedConvolutionVis(nn.Module):
def __init__(self, in_channels, embedding_channels, out_channels, depth, reduced_size, kernel_size,
num_classes) -> None:
super(DilatedConvolutionVis, self).__init__()
self.layers = []
# dilation size will be doubled at each step according to TLoss
dilation_size = 1
for i in range(depth):
block_in_channels = in_channels if i == 0 else embedding_channels
self.layers += [DilatedBlock(block_in_channels,
embedding_channels, kernel_size, dilation_size)]
dilation_size *= 2
self.layers += [DilatedBlock(embedding_channels, reduced_size,
kernel_size, dilation_size, final=True)]
self.global_average_pool = nn.AdaptiveAvgPool1d(1)
        # Note: the dilated encoder uses global max pooling here (not average pooling)
self.network = nn.Sequential(*self.layers,
nn.AdaptiveMaxPool1d(1),
SqueezeChannels(),
# nn.Linear(reduced_size, out_channels),
)
def forward(self, x, vis=False):
if vis:
with torch.no_grad():
return self.network(x), nn.Sequential(*self.layers)(x)
return self.network(x)
class Classifier(nn.Module):
def __init__(self, input_dims, output_dims) -> None:
super(Classifier, self).__init__()
self.dense = nn.Linear(input_dims, output_dims)
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
return self.softmax(self.dense(x))
class NonLinearClassifier(nn.Module):
def __init__(self, input_dim, embedding_dim, output_dim, dropout=0.2) -> None:
super(NonLinearClassifier, self).__init__()
self.net = nn.Sequential(
nn.Linear(input_dim, embedding_dim),
nn.BatchNorm1d(embedding_dim),
nn.ReLU(),
nn.Dropout(dropout),
nn.Linear(embedding_dim, output_dim),
nn.Softmax(dim=1)
)
def forward(self, x):
return self.net(x)
class NonLinearClassifierVis(nn.Module):
def __init__(self, input_dim, embedding_dim, output_dim, dropout=0.2) -> None:
super(NonLinearClassifierVis, self).__init__()
self.dense = nn.Linear(input_dim, embedding_dim)
self.batchnorm = nn.BatchNorm1d(embedding_dim)
self.relu = nn.ReLU()
self.dropout = nn.Dropout(dropout)
self.dense2 = nn.Linear(embedding_dim, output_dim)
self.net = nn.Sequential(
self.dense,
self.batchnorm,
self.relu,
self.dropout,
self.dense2,
nn.Softmax(dim=1)
)
def forward(self, x, vis=False):
if vis:
with torch.no_grad():
x_out = self.dense(x)
x_out = self.batchnorm(x_out)
x_out = self.relu(x_out)
x_out = self.dropout(x_out)
return self.net(x), x_out
return self.net(x)
# for single step
class RNNDecoder(nn.Module):
def __init__(self, input_dim=1, embedding_dim=128) -> None:
super(RNNDecoder, self).__init__()
self.grucell1 = nn.GRUCell(
input_size=input_dim, hidden_size=embedding_dim)
self.grucell2 = nn.GRUCell(
input_size=embedding_dim, hidden_size=embedding_dim)
self.grucell3 = nn.GRUCell(
input_size=embedding_dim, hidden_size=embedding_dim)
self.linear = nn.Linear(in_features=embedding_dim, out_features=input_dim)
# x : single time step (batch_size, 1)
    # TODO: should the training loop be moved into train.py?
def forward(self, h1, h2, h3, x):
hidden1 = self.grucell1(x, h1)
hidden2 = self.grucell2(hidden1, h2)
hidden3 = self.grucell3(hidden2, h3)
out = self.linear(hidden3)
return hidden1, hidden2, hidden3, out
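# Minimal single-step sketch (hypothetical `_demo_rnn_decoder_step`, not in the module):
def _demo_rnn_decoder_step():
    dec = RNNDecoder(input_dim=1, embedding_dim=128)
    h1 = h2 = h3 = torch.zeros(4, 128)  # one hidden state per stacked GRU cell
    x = torch.zeros(4, 1)               # a single time step per batch element
    h1, h2, h3, out = dec(h1, h2, h3, x)
    assert out.shape == (4, 1)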
def conv_out_len(seq_len, ker_size, stride, dilation, stack):
i = 0
for _ in range(stack):
seq_len = int(
(seq_len + (ker_size[i] - 1) - dilation * (ker_size[i] - 1) - 1) / stride + 1)
i = i + 1
return seq_len
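# Worked example (hypothetical `_demo_conv_out_len`, not in the module): with
# dilation=1 and stride=1 the (ker_size - 1) terms cancel, so the length is preserved.
def _demo_conv_out_len():
    assert conv_out_len(seq_len=100, ker_size=[3, 5, 7], stride=1, dilation=1, stack=3) == 100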
class FCNDecoder(nn.Module):
    # For padding='SAME' with stride 1: padding = (kernel_size - 1) / 2
# Ref: https://blog.csdn.net/crystal_sugar/article/details/105547838, http://www.itsnl.cn/16590.html
def __init__(self, num_classes, seq_len=None, input_size=None):
super(FCNDecoder, self).__init__()
self.num_classes = num_classes
self.compressed_len = conv_out_len(seq_len=seq_len, ker_size=[
3, 5, 7], stride=1, dilation=1, stack=3)
self.conv_trans_block1 = nn.Sequential(
nn.ConvTranspose1d(in_channels=128, out_channels=128,
kernel_size=3, padding=1, output_padding=0),
nn.BatchNorm1d(128),
nn.ReLU()
)
self.conv_trans_block2 = nn.Sequential(
nn.ConvTranspose1d(in_channels=128, out_channels=256,
kernel_size=5, padding=2, output_padding=0),
nn.BatchNorm1d(256),
nn.ReLU()
)
self.conv_trans_block3 = nn.Sequential(
nn.ConvTranspose1d(in_channels=256, out_channels=128,
kernel_size=7, padding=3, output_padding=0),
nn.BatchNorm1d(128),
nn.ReLU()
)
self.network = nn.Sequential(
self.conv_trans_block1,
self.conv_trans_block2,
self.conv_trans_block3,
)
self.upsample = nn.Linear(1, self.compressed_len)
self.conv1x1 = nn.Conv1d(128, input_size, 1)
def forward(self, x):
if len(x.shape) == 2:
x = x.unsqueeze(2)
x = self.upsample(x)
x = self.network(x)
x = self.conv1x1(x)
return x
if __name__ == '__main__':
pass
# TODO
# add args (depth, in_channels, out_channels, reduced_size, embedding_channels, kernel_size) to train.py
# finish dataloader.py
================================================
FILE: ts_classification_methods/patchtst/__init__.py
================================================
================================================
FILE: ts_classification_methods/patchtst/main_patchtst_iota.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import argparse
import time
import numpy as np
import torch
from torch.utils.data import DataLoader
from gpt4ts.gpt4ts_utils import load_UEA, normalize_uea_set, UEADataset, save_cls_new_result, set_seed, fill_nan_value, get_all_datasets, build_loss
from gpt4ts.models.gpt4ts import gpt4ts
from patchtst.models.patchTST import PatchTST
from patchtst.patch_mask import PatchCB
def create_patch(xb, patch_len, stride):
"""
xb: [bs x seq_len x n_vars]
"""
seq_len = xb.shape[1]
num_patch = (max(seq_len, patch_len) - patch_len) // stride + 1
tgt_len = patch_len + stride * (num_patch - 1)
s_begin = seq_len - tgt_len
xb = xb[:, s_begin:, :] # xb: [bs x tgt_len x nvars]
xb = xb.unfold(dimension=1, size=patch_len, step=stride) # xb: [bs x num_patch x n_vars x patch_len]
return xb, num_patch
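# Worked example (hypothetical `_demo_create_patch`, not in the original script):
# with seq_len=46 and patch_len=stride=8, num_patch = (46 - 8) // 8 + 1 = 5 and
# tgt_len = 8 + 8 * 4 = 40, so the first s_begin = 6 steps are dropped.
def _demo_create_patch():
    xb = torch.randn(2, 46, 3)  # (bs, seq_len, n_vars)
    patches, num_patch = create_patch(xb, patch_len=8, stride=8)
    assert num_patch == 5 and patches.shape == (2, 5, 3, 8)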
def evaluate_gpt4ts(args, val_loader, model, loss):
val_loss = 0
val_accu = 0
sum_len = 0
for data, target in val_loader:
'''
data, target = data.to(device), target.to(device)
target = target.to(torch.int64)
'''
with torch.no_grad():
xb, num_patch = create_patch(xb=data.permute(0, 2, 1), patch_len=args.patch_len, stride=args.stride)
val_pred = model(xb)
val_loss += loss(val_pred, target).item()
val_accu += torch.sum(torch.argmax(val_pred.data, axis=1) == target)
sum_len += len(target)
return val_loss / sum_len, val_accu / sum_len
if __name__ == '__main__': ##
parser = argparse.ArgumentParser()
# Base setup
parser.add_argument('--backbone', type=str, default='fcn', help='encoder backbone, fcn')
parser.add_argument('--random_seed', type=int, default=42, help='shuffle seed')
# Dataset setup
parser.add_argument('--dataset', type=str, default='LSST',
help='dataset(in ucr)') # LSST Heartbeat Images
# parser.add_argument('--dataroot', type=str, default='../UCRArchive_2018', help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/dev_data/lz/time_series_pretrain/datasets/UCRArchive_2018',
# help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/SSD/lz/UCRArchive_2018', help='path of UCR folder')
parser.add_argument('--dataroot', type=str, default='/dev_data/lz/Multivariate2018_arff', help='path of UEA folder')
parser.add_argument('--num_classes', type=int, default=0, help='number of class')
parser.add_argument('--normalize_way', type=str, default='single', help='single or train_set')
parser.add_argument('--seq_len', type=int, default=46, help='seq_len')
parser.add_argument('--input_size', type=int, default=1, help='input_size')
# parser.add_argument('--patch_size', type=int, default=8, help='patch_size')
# parser.add_argument('--stride', type=int, default=8, help='stride')
parser.add_argument('--target_points', type=int, default=96, help='forecast horizon')
# Patch
parser.add_argument('--patch_len', type=int, default=8, help='patch length')
parser.add_argument('--stride', type=int, default=8, help='stride between patch')
# RevIN
parser.add_argument('--revin', type=int, default=1, help='reversible instance normalization')
# Model args
parser.add_argument('--n_layers', type=int, default=3, help='number of Transformer layers')
parser.add_argument('--n_heads', type=int, default=16, help='number of Transformer heads')
parser.add_argument('--d_model', type=int, default=128, help='Transformer d_model')
    parser.add_argument('--d_ff', type=int, default=256, help='Transformer MLP dimension')
parser.add_argument('--dropout', type=float, default=0.2, help='Transformer dropout')
parser.add_argument('--head_dropout', type=float, default=0, help='head dropout')
# Semi training
    parser.add_argument('--labeled_ratio', type=float, default=0.1, help='0.1, 0.2, 0.4')  # numeric default; argparse does not coerce string defaults
# training setup
parser.add_argument('--loss', type=str, default='cross_entropy', help='loss function')
parser.add_argument('--optimizer', type=str, default='adam', help='optimizer')
parser.add_argument('--lr', type=float, default=0.001, help='learning rate')
parser.add_argument('--weight_decay', type=float, default=0.0, help='weight decay')
    parser.add_argument('--batch_size', type=int, default=128, help='batch size')
parser.add_argument('--epoch', type=int, default=100, help='training epoch')
parser.add_argument('--cuda', type=str, default='cuda:0')
parser.add_argument('--save_dir', type=str, default='/dev_data/lz/time_series_label_noise/result')
parser.add_argument('--save_csv_name', type=str, default='patchtst_supervised_patch8_1224_')
# classifier setup
parser.add_argument('--classifier', type=str, default='linear', help='type of classifier(linear or nonlinear)')
parser.add_argument('--classifier_input', type=int, default=128, help='input dim of the classifiers')
args = parser.parse_args()
device = torch.device(args.cuda if torch.cuda.is_available() else "cpu")
set_seed(args)
# sum_dataset, sum_target, num_classes = build_dataset(args)
sum_dataset, sum_target, num_classes = load_UEA(args.dataroot, args.dataset)
# args.num_classes = num_classes
# args.seq_len = sum_dataset.shape[1]
args.num_classes = num_classes
args.seq_len = sum_dataset.shape[1]
args.input_size = sum_dataset.shape[2]
# get number of patches
num_patch = (max(args.seq_len, args.patch_len) - args.patch_len) // args.stride + 1
print('number of patches:', num_patch)
while sum_dataset.shape[0] * 0.6 < args.batch_size:
args.batch_size = args.batch_size // 2
print("args.batch_size = ", args.batch_size, ", sum_dataset.shape = ", sum_dataset.shape)
# get model
model = PatchTST(c_in=args.input_size,
target_dim=args.target_points,
patch_len=args.patch_len,
stride=args.stride,
num_patch=num_patch,
n_layers=args.n_layers,
n_heads=args.n_heads,
d_model=args.d_model,
shared_embedding=True,
d_ff=args.d_ff,
dropout=args.dropout,
head_dropout=args.head_dropout,
act='relu',
head_type='classification',
res_attention=False
)
# model = gpt4ts(max_seq_len=args.seq_len, num_classes=args.num_classes, var_len=args.input_size, patch_size=args.patch_size, stride=args.stride)
model = model.to(device)
# model, classifier = build_model(args)
# model, classifier = model.to(device), classifier.to(device)
loss = build_loss(args).to(device)
model_init_state = model.state_dict()
# classifier_init_state = classifier.state_dict()
if args.optimizer == 'adam':
optimizer = torch.optim.Adam([{'params': model.parameters()}],
lr=args.lr, weight_decay=args.weight_decay)
elif args.optimizer == 'sgd':
optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = get_all_datasets(
sum_dataset, sum_target)
losses = []
test_accuracies = []
train_time = 0.0
end_val_epochs = []
for i, train_dataset in enumerate(train_datasets):
t = time.time()
model.load_state_dict(model_init_state)
# classifier.load_state_dict(classifier_init_state)
        print('fold {}: start training and evaluation'.format(i))
train_target = train_targets[i]
val_dataset = val_datasets[i]
val_target = val_targets[i]
test_dataset = test_datasets[i]
test_target = test_targets[i]
train_dataset, val_dataset, test_dataset = fill_nan_value(train_dataset, val_dataset, test_dataset)
if args.normalize_way == 'single':
# TODO normalize per series
train_dataset = normalize_uea_set(train_dataset)
val_dataset = normalize_uea_set(val_dataset)
test_dataset = normalize_uea_set(test_dataset)
# else:
# train_dataset, val_dataset, test_dataset = normalize_train_val_test(train_dataset, val_dataset,
# test_dataset)
train_set = UEADataset(torch.from_numpy(train_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(train_target).type(torch.FloatTensor).to(device).to(torch.int64))
val_set = UEADataset(torch.from_numpy(val_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(val_target).type(torch.FloatTensor).to(device).to(torch.int64))
test_set = UEADataset(torch.from_numpy(test_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(test_target).type(torch.FloatTensor).to(device).to(torch.int64))
train_loader = DataLoader(train_set, batch_size=args.batch_size, num_workers=0, drop_last=True)
val_loader = DataLoader(val_set, batch_size=args.batch_size, num_workers=0)
test_loader = DataLoader(test_set, batch_size=args.batch_size, num_workers=0)
train_loss = []
train_accuracy = []
        last_loss = float('inf')
        stop_count = 0
        increase_count = 0
        # steps per epoch; train_loader uses drop_last=True, so partial batches are excluded
        num_steps = len(train_set) // args.batch_size
min_val_loss = float('inf')
test_accuracy = 0
end_val_epoch = 0
for epoch in range(args.epoch):
if stop_count == 80 or increase_count == 80:
                print('model converged at epoch {}, early stopping'.format(epoch))
break
epoch_train_loss = 0
epoch_train_acc = 0
num_iterations = 0
model.train()
train_embed = []
for x, y in train_loader:
optimizer.zero_grad()
# print("raw x.shape = ", x.shape)
xb, num_patch = create_patch(xb=x.permute(0,2,1), patch_len=args.patch_len, stride=args.stride)
# print("patch xb.shape = ", xb.shape)
pred = model(xb)
step_loss = loss(pred, y)
# step_loss.backward(retain_graph=True)
step_loss.backward()
optimizer.step()
epoch_train_loss += step_loss.item()
epoch_train_acc += torch.sum(torch.argmax(pred.data, axis=1) == y) / len(y)
num_iterations += 1
epoch_train_loss /= num_steps
epoch_train_acc /= num_steps
# train_embed = np.concatenate(train_embed)
model.eval()
val_loss, val_accu = evaluate_gpt4ts(args, val_loader, model, loss)
if min_val_loss > val_loss:
min_val_loss = val_loss
end_val_epoch = epoch
test_loss, test_accuracy = evaluate_gpt4ts(args, test_loader, model, loss)
if abs(last_loss - val_loss) <= 1e-4:
stop_count += 1
else:
stop_count = 0
if val_loss > last_loss:
increase_count += 1
else:
increase_count = 0
last_loss = val_loss
if epoch % 50 == 0:
print(
"epoch : {}, train loss: {} , train accuracy : {}, \ntest_accuracy : {}".format(
epoch, epoch_train_loss, epoch_train_acc, test_accuracy))
test_accuracies.append(test_accuracy)
end_val_epochs.append(end_val_epoch)
t = time.time() - t
train_time += t
        print('fold {}: training finished'.format(i))
test_accuracies = torch.Tensor(test_accuracies)
print("Training end: mean_test_acc = ", round(torch.mean(test_accuracies).item(), 4),
"traning time (seconds) = ",
round(train_time, 4), ", seed = ", args.random_seed)
test_accuracies = test_accuracies.cpu().numpy()
save_cls_new_result(args, np.mean(test_accuracies), np.max(test_accuracies), np.min(test_accuracies),
np.std(test_accuracies), train_time)
print('Done!')
================================================
FILE: ts_classification_methods/patchtst/main_patchtst_ucr.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import argparse
import time
import numpy as np
import torch
from torch.utils.data import DataLoader
from gpt4ts.gpt4ts_utils import load_UEA, normalize_uea_set, UEADataset, save_cls_new_result, set_seed, fill_nan_value, get_all_datasets, build_loss, build_dataset
from gpt4ts.models.gpt4ts import gpt4ts
from patchtst.models.patchTST import PatchTST
from patchtst.patch_mask import PatchCB
def create_patch(xb, patch_len, stride):
"""
xb: [bs x seq_len x n_vars]
"""
seq_len = xb.shape[1]
num_patch = (max(seq_len, patch_len) - patch_len) // stride + 1
tgt_len = patch_len + stride * (num_patch - 1)
s_begin = seq_len - tgt_len
xb = xb[:, s_begin:, :] # xb: [bs x tgt_len x nvars]
xb = xb.unfold(dimension=1, size=patch_len, step=stride) # xb: [bs x num_patch x n_vars x patch_len]
return xb, num_patch
def evaluate_gpt4ts(args, val_loader, model, loss):
val_loss = 0
val_accu = 0
sum_len = 0
for data, target in val_loader:
'''
data, target = data.to(device), target.to(device)
target = target.to(torch.int64)
'''
with torch.no_grad():
xb, num_patch = create_patch(xb=data.permute(0, 2, 1), patch_len=args.patch_len, stride=args.stride)
val_pred = model(xb)
val_loss += loss(val_pred, target).item()
val_accu += torch.sum(torch.argmax(val_pred.data, axis=1) == target)
sum_len += len(target)
return val_loss / sum_len, val_accu / sum_len
if __name__ == '__main__': ##
parser = argparse.ArgumentParser()
# Base setup
parser.add_argument('--backbone', type=str, default='fcn', help='encoder backbone, fcn')
parser.add_argument('--random_seed', type=int, default=42, help='shuffle seed')
# Dataset setup UCR, PatchTST: ['Beef', 'Ham']
parser.add_argument('--dataset', type=str, default='Ham',
help='dataset(in ucr)') # LSST Heartbeat Images # Trace, TwoPatterns, UWaveGestureLibraryAll
# parser.add_argument('--dataroot', type=str, default='../UCRArchive_2018', help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/dev_data/lz/time_series_pretrain/datasets/UCRArchive_2018',
# help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/SSD/lz/UCRArchive_2018', help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/SSD/lz/Multivariate2018_arff', help='path of UEA folder')
    parser.add_argument('--dataroot', type=str, default='/SSD/lz/UCRArchive_2018',
                        help='path of UCR folder')
parser.add_argument('--num_classes', type=int, default=0, help='number of class')
parser.add_argument('--normalize_way', type=str, default='single', help='single or train_set')
parser.add_argument('--seq_len', type=int, default=46, help='seq_len')
parser.add_argument('--input_size', type=int, default=1, help='input_size')
# parser.add_argument('--patch_size', type=int, default=8, help='patch_size')
# parser.add_argument('--stride', type=int, default=8, help='stride')
parser.add_argument('--target_points', type=int, default=96, help='forecast horizon')
# Patch
parser.add_argument('--patch_len', type=int, default=8, help='patch length')
parser.add_argument('--stride', type=int, default=8, help='stride between patch')
# RevIN
parser.add_argument('--revin', type=int, default=1, help='reversible instance normalization')
# Model args
parser.add_argument('--n_layers', type=int, default=3, help='number of Transformer layers')
parser.add_argument('--n_heads', type=int, default=16, help='number of Transformer heads')
parser.add_argument('--d_model', type=int, default=128, help='Transformer d_model')
    parser.add_argument('--d_ff', type=int, default=256, help='Transformer MLP dimension')
parser.add_argument('--dropout', type=float, default=0.2, help='Transformer dropout')
parser.add_argument('--head_dropout', type=float, default=0, help='head dropout')
# Semi training
    parser.add_argument('--labeled_ratio', type=float, default=0.1, help='0.1, 0.2, 0.4')  # numeric default; argparse does not coerce string defaults
# training setup
parser.add_argument('--loss', type=str, default='cross_entropy', help='loss function')
parser.add_argument('--optimizer', type=str, default='adam', help='optimizer')
parser.add_argument('--lr', type=float, default=0.001, help='learning rate')
parser.add_argument('--weight_decay', type=float, default=0.0, help='weight decay')
    parser.add_argument('--batch_size', type=int, default=8, help='batch size')
parser.add_argument('--epoch', type=int, default=100, help='training epoch')
parser.add_argument('--cuda', type=str, default='cuda:0')
    parser.add_argument('--save_dir', type=str, default='/SSD/lz/time_series_label_noise/result')
parser.add_argument('--save_csv_name', type=str, default='patchtst_supervised_240731_')
# classifier setup
parser.add_argument('--classifier', type=str, default='linear', help='type of classifier(linear or nonlinear)')
parser.add_argument('--classifier_input', type=int, default=128, help='input dim of the classifiers')
args = parser.parse_args()
device = torch.device(args.cuda if torch.cuda.is_available() else "cpu")
set_seed(args)
# sum_dataset, sum_target, num_classes = build_dataset(args)
# sum_dataset, sum_target, num_classes = load_UEA(args.dataroot, args.dataset)
# args.num_classes = num_classes
# args.seq_len = sum_dataset.shape[1]
sum_dataset, sum_target, num_classes = build_dataset(args)
# args.num_classes = num_classes
# x_train_labeled = x_train_few[:, np.newaxis, :]
# x_val_labeled = val_dataset[:, np.newaxis, :]
# x_test_labeled = test_dataset[:, np.newaxis, :]
sum_dataset = sum_dataset[:, :, np.newaxis]
args.num_classes = num_classes
args.seq_len = sum_dataset.shape[1]
args.input_size = sum_dataset.shape[2]
# get number of patches
num_patch = (max(args.seq_len, args.patch_len) - args.patch_len) // args.stride + 1
print('number of patches:', num_patch)
while sum_dataset.shape[0] * 0.6 < args.batch_size:
args.batch_size = args.batch_size // 2
print("args.batch_size = ", args.batch_size, ", sum_dataset.shape = ", sum_dataset.shape)
# get model
model = PatchTST(c_in=args.input_size,
target_dim=args.target_points,
patch_len=args.patch_len,
stride=args.stride,
num_patch=num_patch,
n_layers=args.n_layers,
n_heads=args.n_heads,
d_model=args.d_model,
shared_embedding=True,
d_ff=args.d_ff,
dropout=args.dropout,
head_dropout=args.head_dropout,
act='relu',
head_type='classification',
res_attention=False
)
# model = gpt4ts(max_seq_len=args.seq_len, num_classes=args.num_classes, var_len=args.input_size, patch_size=args.patch_size, stride=args.stride)
model = model.to(device)
# model, classifier = build_model(args)
# model, classifier = model.to(device), classifier.to(device)
loss = build_loss(args).to(device)
model_init_state = model.state_dict()
# classifier_init_state = classifier.state_dict()
if args.optimizer == 'adam':
optimizer = torch.optim.Adam([{'params': model.parameters()}],
lr=args.lr, weight_decay=args.weight_decay)
elif args.optimizer == 'sgd':
optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = get_all_datasets(
sum_dataset, sum_target)
losses = []
test_accuracies = []
train_time = 0.0
end_val_epochs = []
for i, train_dataset in enumerate(train_datasets):
t = time.time()
model.load_state_dict(model_init_state)
# classifier.load_state_dict(classifier_init_state)
        print('fold {}: start training and evaluation'.format(i))
train_target = train_targets[i]
val_dataset = val_datasets[i]
val_target = val_targets[i]
test_dataset = test_datasets[i]
test_target = test_targets[i]
train_dataset, val_dataset, test_dataset = fill_nan_value(train_dataset, val_dataset, test_dataset)
if args.normalize_way == 'single':
# TODO normalize per series
train_dataset = normalize_uea_set(train_dataset)
val_dataset = normalize_uea_set(val_dataset)
test_dataset = normalize_uea_set(test_dataset)
# else:
# train_dataset, val_dataset, test_dataset = normalize_train_val_test(train_dataset, val_dataset,
# test_dataset)
train_set = UEADataset(torch.from_numpy(train_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(train_target).type(torch.FloatTensor).to(device).to(torch.int64))
val_set = UEADataset(torch.from_numpy(val_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(val_target).type(torch.FloatTensor).to(device).to(torch.int64))
test_set = UEADataset(torch.from_numpy(test_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(test_target).type(torch.FloatTensor).to(device).to(torch.int64))
train_loader = DataLoader(train_set, batch_size=args.batch_size, num_workers=0, drop_last=True)
val_loader = DataLoader(val_set, batch_size=args.batch_size, num_workers=0)
test_loader = DataLoader(test_set, batch_size=args.batch_size, num_workers=0)
train_loss = []
train_accuracy = []
        last_loss = float('inf')
        stop_count = 0
        increase_count = 0
        # steps per epoch; train_loader uses drop_last=True, so partial batches are excluded
        num_steps = len(train_set) // args.batch_size
min_val_loss = float('inf')
test_accuracy = 0
end_val_epoch = 0
for epoch in range(args.epoch):
if stop_count == 50 or increase_count == 50:
                print('model converged at epoch {}, early stopping'.format(epoch))
break
epoch_train_loss = 0
epoch_train_acc = 0
num_iterations = 0
model.train()
train_embed = []
for x, y in train_loader:
optimizer.zero_grad()
# print("raw x.shape = ", x.shape)
xb, num_patch = create_patch(xb=x.permute(0,2,1), patch_len=args.patch_len, stride=args.stride)
# print("patch xb.shape = ", xb.shape)
pred = model(xb)
step_loss = loss(pred, y)
# step_loss.backward(retain_graph=True)
step_loss.backward()
optimizer.step()
epoch_train_loss += step_loss.item()
epoch_train_acc += torch.sum(torch.argmax(pred.data, axis=1) == y) / len(y)
num_iterations += 1
epoch_train_loss /= num_steps
epoch_train_acc /= num_steps
# train_embed = np.concatenate(train_embed)
model.eval()
val_loss, val_accu = evaluate_gpt4ts(args, val_loader, model, loss)
if min_val_loss > val_loss:
min_val_loss = val_loss
end_val_epoch = epoch
test_loss, test_accuracy = evaluate_gpt4ts(args, test_loader, model, loss)
if abs(last_loss - val_loss) <= 1e-4:
stop_count += 1
else:
stop_count = 0
if val_loss > last_loss:
increase_count += 1
else:
increase_count = 0
last_loss = val_loss
if epoch % 50 == 0:
print(
"epoch : {}, train loss: {} , train accuracy : {}, \ntest_accuracy : {}".format(
epoch, epoch_train_loss, epoch_train_acc, test_accuracy))
test_accuracies.append(test_accuracy)
end_val_epochs.append(end_val_epoch)
t = time.time() - t
train_time += t
        print('fold {}: training finished'.format(i))
test_accuracies = torch.Tensor(test_accuracies)
print("Training end: mean_test_acc = ", round(torch.mean(test_accuracies).item(), 4),
"traning time (seconds) = ",
round(train_time, 4), ", seed = ", args.random_seed)
test_accuracies = test_accuracies.cpu().numpy()
save_cls_new_result(args, np.mean(test_accuracies), np.max(test_accuracies), np.min(test_accuracies),
np.std(test_accuracies), train_time)
print('Done!')
================================================
FILE: ts_classification_methods/patchtst/mian_patchtst.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import argparse
import time
import numpy as np
import torch
from torch.utils.data import DataLoader
from gpt4ts.gpt4ts_utils import load_UEA, normalize_uea_set, UEADataset, save_cls_new_result, set_seed, fill_nan_value, get_all_datasets, build_loss
from gpt4ts.models.gpt4ts import gpt4ts
from patchtst.models.patchTST import PatchTST
from patchtst.patch_mask import PatchCB
def create_patch(xb, patch_len, stride):
"""
xb: [bs x seq_len x n_vars]
"""
seq_len = xb.shape[1]
num_patch = (max(seq_len, patch_len) - patch_len) // stride + 1
tgt_len = patch_len + stride * (num_patch - 1)
s_begin = seq_len - tgt_len
xb = xb[:, s_begin:, :] # xb: [bs x tgt_len x nvars]
xb = xb.unfold(dimension=1, size=patch_len, step=stride) # xb: [bs x num_patch x n_vars x patch_len]
return xb, num_patch
def evaluate_gpt4ts(args, val_loader, model, loss):
val_loss = 0
val_accu = 0
sum_len = 0
for data, target in val_loader:
'''
data, target = data.to(device), target.to(device)
target = target.to(torch.int64)
'''
with torch.no_grad():
xb, num_patch = create_patch(xb=data.permute(0, 2, 1), patch_len=args.patch_len, stride=args.stride)
val_pred = model(xb)
val_loss += loss(val_pred, target).item()
val_accu += torch.sum(torch.argmax(val_pred.data, axis=1) == target)
sum_len += len(target)
return val_loss / sum_len, val_accu / sum_len
if __name__ == '__main__': ##
parser = argparse.ArgumentParser()
# Base setup
parser.add_argument('--backbone', type=str, default='fcn', help='encoder backbone, fcn')
parser.add_argument('--random_seed', type=int, default=42, help='shuffle seed')
# Dataset setup UEA, PatchTST: ['DuckDuckGeese', 'EigenWorms', 'MotorImagery', 'PEMS-SF', 'StandWalkJump']
parser.add_argument('--dataset', type=str, default='EigenWorms',
help='dataset(in ucr)') # LSST Heartbeat Images
# parser.add_argument('--dataroot', type=str, default='../UCRArchive_2018', help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/dev_data/lz/time_series_pretrain/datasets/UCRArchive_2018',
# help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/SSD/lz/UCRArchive_2018', help='path of UCR folder')
parser.add_argument('--dataroot', type=str, default='/SSD/lz/Multivariate2018_arff', help='path of UEA folder')
parser.add_argument('--num_classes', type=int, default=0, help='number of class')
parser.add_argument('--normalize_way', type=str, default='single', help='single or train_set')
parser.add_argument('--seq_len', type=int, default=46, help='seq_len')
parser.add_argument('--input_size', type=int, default=1, help='input_size')
# parser.add_argument('--patch_size', type=int, default=8, help='patch_size')
# parser.add_argument('--stride', type=int, default=8, help='stride')
parser.add_argument('--target_points', type=int, default=96, help='forecast horizon')
# Patch
parser.add_argument('--patch_len', type=int, default=8, help='patch length')
parser.add_argument('--stride', type=int, default=8, help='stride between patch')
# RevIN
parser.add_argument('--revin', type=int, default=1, help='reversible instance normalization')
# Model args
parser.add_argument('--n_layers', type=int, default=3, help='number of Transformer layers')
parser.add_argument('--n_heads', type=int, default=16, help='number of Transformer heads')
parser.add_argument('--d_model', type=int, default=128, help='Transformer d_model')
    parser.add_argument('--d_ff', type=int, default=256, help='Transformer MLP dimension')
parser.add_argument('--dropout', type=float, default=0.2, help='Transformer dropout')
parser.add_argument('--head_dropout', type=float, default=0, help='head dropout')
# Semi training
    parser.add_argument('--labeled_ratio', type=float, default=0.1, help='0.1, 0.2, 0.4')  # numeric default; argparse does not coerce string defaults
# training setup
parser.add_argument('--loss', type=str, default='cross_entropy', help='loss function')
parser.add_argument('--optimizer', type=str, default='adam', help='optimizer')
parser.add_argument('--lr', type=float, default=0.001, help='learning rate')
parser.add_argument('--weight_decay', type=float, default=0.0, help='weight decay')
    parser.add_argument('--batch_size', type=int, default=1, help='batch size')
parser.add_argument('--epoch', type=int, default=100, help='training epoch')
parser.add_argument('--cuda', type=str, default='cuda:0')
parser.add_argument('--save_dir', type=str, default='/SSD/lz/time_series_label_noise/result')
parser.add_argument('--save_csv_name', type=str, default='patchtst_uea_supervised_240731_')
# classifier setup
parser.add_argument('--classifier', type=str, default='linear', help='type of classifier(linear or nonlinear)')
parser.add_argument('--classifier_input', type=int, default=128, help='input dim of the classifiers')
args = parser.parse_args()
device = torch.device(args.cuda if torch.cuda.is_available() else "cpu")
set_seed(args)
# sum_dataset, sum_target, num_classes = build_dataset(args)
sum_dataset, sum_target, num_classes = load_UEA(args.dataroot, args.dataset)
# args.num_classes = num_classes
# args.seq_len = sum_dataset.shape[1]
args.num_classes = num_classes
args.seq_len = sum_dataset.shape[1]
args.input_size = sum_dataset.shape[2]
# get number of patches
num_patch = (max(args.seq_len, args.patch_len) - args.patch_len) // args.stride + 1
print('number of patches:', num_patch)
while sum_dataset.shape[0] * 0.6 < args.batch_size:
args.batch_size = args.batch_size // 2
print("args.batch_size = ", args.batch_size, ", sum_dataset.shape = ", sum_dataset.shape)
# get model
model = PatchTST(c_in=args.input_size,
target_dim=args.target_points,
patch_len=args.patch_len,
stride=args.stride,
num_patch=num_patch,
n_layers=args.n_layers,
n_heads=args.n_heads,
d_model=args.d_model,
shared_embedding=True,
d_ff=args.d_ff,
dropout=args.dropout,
head_dropout=args.head_dropout,
act='relu',
head_type='classification',
res_attention=False
)
# model = gpt4ts(max_seq_len=args.seq_len, num_classes=args.num_classes, var_len=args.input_size, patch_size=args.patch_size, stride=args.stride)
model = model.to(device)
# model, classifier = build_model(args)
# model, classifier = model.to(device), classifier.to(device)
loss = build_loss(args).to(device)
model_init_state = model.state_dict()
# classifier_init_state = classifier.state_dict()
if args.optimizer == 'adam':
optimizer = torch.optim.Adam([{'params': model.parameters()}],
lr=args.lr, weight_decay=args.weight_decay)
elif args.optimizer == 'sgd':
optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = get_all_datasets(
sum_dataset, sum_target)
losses = []
test_accuracies = []
train_time = 0.0
end_val_epochs = []
for i, train_dataset in enumerate(train_datasets):
t = time.time()
model.load_state_dict(model_init_state)
# classifier.load_state_dict(classifier_init_state)
        print('fold {}: start training and evaluation'.format(i))
train_target = train_targets[i]
val_dataset = val_datasets[i]
val_target = val_targets[i]
test_dataset = test_datasets[i]
test_target = test_targets[i]
train_dataset, val_dataset, test_dataset = fill_nan_value(train_dataset, val_dataset, test_dataset)
if args.normalize_way == 'single':
# TODO normalize per series
train_dataset = normalize_uea_set(train_dataset)
val_dataset = normalize_uea_set(val_dataset)
test_dataset = normalize_uea_set(test_dataset)
# else:
# train_dataset, val_dataset, test_dataset = normalize_train_val_test(train_dataset, val_dataset,
# test_dataset)
train_set = UEADataset(torch.from_numpy(train_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(train_target).type(torch.FloatTensor).to(device).to(torch.int64))
val_set = UEADataset(torch.from_numpy(val_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(val_target).type(torch.FloatTensor).to(device).to(torch.int64))
test_set = UEADataset(torch.from_numpy(test_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(test_target).type(torch.FloatTensor).to(device).to(torch.int64))
train_loader = DataLoader(train_set, batch_size=args.batch_size, num_workers=0, drop_last=True)
val_loader = DataLoader(val_set, batch_size=args.batch_size, num_workers=0)
test_loader = DataLoader(test_set, batch_size=args.batch_size, num_workers=0)
train_loss = []
train_accuracy = []
        last_loss = float('inf')
        stop_count = 0
        increase_count = 0
        # steps per epoch; train_loader uses drop_last=True, so partial batches are excluded
        num_steps = len(train_set) // args.batch_size
min_val_loss = float('inf')
test_accuracy = 0
end_val_epoch = 0
for epoch in range(args.epoch):
if stop_count == 50 or increase_count == 50:
                print('model converged at epoch {}, early stopping'.format(epoch))
break
epoch_train_loss = 0
epoch_train_acc = 0
num_iterations = 0
model.train()
train_embed = []
for x, y in train_loader:
optimizer.zero_grad()
# print("raw x.shape = ", x.shape)
xb, num_patch = create_patch(xb=x.permute(0,2,1), patch_len=args.patch_len, stride=args.stride)
# print("patch xb.shape = ", xb.shape)
pred = model(xb)
step_loss = loss(pred, y)
# step_loss.backward(retain_graph=True)
step_loss.backward()
optimizer.step()
epoch_train_loss += step_loss.item()
epoch_train_acc += torch.sum(torch.argmax(pred.data, axis=1) == y) / len(y)
num_iterations += 1
epoch_train_loss /= num_steps
epoch_train_acc /= num_steps
# train_embed = np.concatenate(train_embed)
model.eval()
val_loss, val_accu = evaluate_gpt4ts(args, val_loader, model, loss)
if min_val_loss > val_loss:
min_val_loss = val_loss
end_val_epoch = epoch
test_loss, test_accuracy = evaluate_gpt4ts(args, test_loader, model, loss)
if abs(last_loss - val_loss) <= 1e-4:
stop_count += 1
else:
stop_count = 0
if val_loss > last_loss:
increase_count += 1
else:
increase_count = 0
last_loss = val_loss
if epoch % 50 == 0:
print(
"epoch : {}, train loss: {} , train accuracy : {}, \ntest_accuracy : {}".format(
epoch, epoch_train_loss, epoch_train_acc, test_accuracy))
test_accuracies.append(test_accuracy)
end_val_epochs.append(end_val_epoch)
t = time.time() - t
train_time += t
        print('fold {}: training finished'.format(i))
test_accuracies = torch.Tensor(test_accuracies)
print("Training end: mean_test_acc = ", round(torch.mean(test_accuracies).item(), 4),
"traning time (seconds) = ",
round(train_time, 4), ", seed = ", args.random_seed)
test_accuracies = test_accuracies.cpu().numpy()
save_cls_new_result(args, np.mean(test_accuracies), np.max(test_accuracies), np.min(test_accuracies),
np.std(test_accuracies), train_time)
print('Done!')
================================================
FILE: ts_classification_methods/patchtst/models/__init__.py
================================================
================================================
FILE: ts_classification_methods/patchtst/models/attention.py
================================================
import numpy as np  # needed for the np.inf fill value in the masking branches below
import torch
from torch import nn
from torch import Tensor
import torch.nn.functional as F
from typing import Optional
class MultiheadAttention(nn.Module):
def __init__(self, d_model, n_heads, d_k=None, d_v=None, res_attention=False, attn_dropout=0., proj_dropout=0.,
qkv_bias=True, lsa=False):
"""Multi Head Attention Layer
Input shape:
Q: [batch_size (bs) x max_q_len x d_model]
K, V: [batch_size (bs) x q_len x d_model]
mask: [q_len x q_len]
"""
super().__init__()
d_k = d_model // n_heads if d_k is None else d_k
d_v = d_model // n_heads if d_v is None else d_v
self.n_heads, self.d_k, self.d_v = n_heads, d_k, d_v
self.W_Q = nn.Linear(d_model, d_k * n_heads, bias=qkv_bias)
self.W_K = nn.Linear(d_model, d_k * n_heads, bias=qkv_bias)
self.W_V = nn.Linear(d_model, d_v * n_heads, bias=qkv_bias)
# Scaled Dot-Product Attention (multiple heads)
self.res_attention = res_attention
self.sdp_attn = ScaledDotProductAttention(d_model, n_heads, attn_dropout=attn_dropout,
res_attention=self.res_attention, lsa=lsa)
        # Project output
self.to_out = nn.Sequential(nn.Linear(n_heads * d_v, d_model), nn.Dropout(proj_dropout))
def forward(self, Q: Tensor, K: Optional[Tensor] = None, V: Optional[Tensor] = None, prev: Optional[Tensor] = None,
key_padding_mask: Optional[Tensor] = None, attn_mask: Optional[Tensor] = None):
bs = Q.size(0)
if K is None: K = Q
if V is None: V = Q
# Linear (+ split in multiple heads)
q_s = self.W_Q(Q).view(bs, -1, self.n_heads, self.d_k).transpose(1,
2) # q_s : [bs x n_heads x max_q_len x d_k]
k_s = self.W_K(K).view(bs, -1, self.n_heads, self.d_k).permute(0, 2, 3,
1) # k_s : [bs x n_heads x d_k x q_len] - transpose(1,2) + transpose(2,3)
v_s = self.W_V(V).view(bs, -1, self.n_heads, self.d_v).transpose(1, 2) # v_s : [bs x n_heads x q_len x d_v]
# Apply Scaled Dot-Product Attention (multiple heads)
if self.res_attention:
output, attn_weights, attn_scores = self.sdp_attn(q_s, k_s, v_s, prev=prev,
key_padding_mask=key_padding_mask, attn_mask=attn_mask)
else:
output, attn_weights = self.sdp_attn(q_s, k_s, v_s, key_padding_mask=key_padding_mask, attn_mask=attn_mask)
# output: [bs x n_heads x q_len x d_v], attn: [bs x n_heads x q_len x q_len], scores: [bs x n_heads x max_q_len x q_len]
# back to the original inputs dimensions
output = output.transpose(1, 2).contiguous().view(bs, -1,
self.n_heads * self.d_v) # output: [bs x q_len x n_heads * d_v]
output = self.to_out(output)
if self.res_attention:
return output, attn_weights, attn_scores
else:
return output, attn_weights
class ScaledDotProductAttention(nn.Module):
r"""Scaled Dot-Product Attention module (Attention is all you need by Vaswani et al., 2017) with optional residual attention from previous layer
(Realformer: Transformer likes residual attention by He et al, 2020) and locality self sttention (Vision Transformer for Small-Size Datasets
by Lee et al, 2021)"""
def __init__(self, d_model, n_heads, attn_dropout=0., res_attention=False, lsa=False):
super().__init__()
self.attn_dropout = nn.Dropout(attn_dropout)
self.res_attention = res_attention
head_dim = d_model // n_heads
self.scale = nn.Parameter(torch.tensor(head_dim ** -0.5), requires_grad=lsa)
self.lsa = lsa
def forward(self, q: Tensor, k: Tensor, v: Tensor, prev: Optional[Tensor] = None,
key_padding_mask: Optional[Tensor] = None, attn_mask: Optional[Tensor] = None):
'''
Input shape:
q : [bs x n_heads x max_q_len x d_k]
k : [bs x n_heads x d_k x seq_len]
v : [bs x n_heads x seq_len x d_v]
prev : [bs x n_heads x q_len x seq_len]
key_padding_mask: [bs x seq_len]
attn_mask : [1 x seq_len x seq_len]
Output shape:
output: [bs x n_heads x q_len x d_v]
attn : [bs x n_heads x q_len x seq_len]
scores : [bs x n_heads x q_len x seq_len]
'''
# Scaled MatMul (q, k) - similarity scores for all pairs of positions in an input sequence
attn_scores = torch.matmul(q, k) * self.scale # attn_scores : [bs x n_heads x max_q_len x q_len]
# Add pre-softmax attention scores from the previous layer (optional)
if prev is not None: attn_scores = attn_scores + prev
# Attention mask (optional)
if attn_mask is not None: # attn_mask with shape [q_len x seq_len] - only used when q_len == seq_len
if attn_mask.dtype == torch.bool:
attn_scores.masked_fill_(attn_mask, -np.inf)
else:
attn_scores += attn_mask
# Key padding mask (optional)
if key_padding_mask is not None: # mask with shape [bs x q_len] (only when max_w_len == q_len)
attn_scores.masked_fill_(key_padding_mask.unsqueeze(1).unsqueeze(2), -np.inf)
# normalize the attention weights
attn_weights = F.softmax(attn_scores, dim=-1) # attn_weights : [bs x n_heads x max_q_len x q_len]
attn_weights = self.attn_dropout(attn_weights)
# compute the new values given the attention weights
output = torch.matmul(attn_weights, v) # output: [bs x n_heads x max_q_len x d_v]
if self.res_attention:
return output, attn_weights, attn_scores
else:
return output, attn_weights
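# --- Hedged usage sketch (not from the original file; sizes are illustrative assumptions) ---
# A minimal shape check for the two attention modules above. `res_attention=False`
# is passed explicitly so each call returns exactly (output, attn_weights).
if __name__ == "__main__":
    bs, n_heads, q_len, d_k, d_v = 2, 4, 16, 8, 8
    sdp = ScaledDotProductAttention(d_model=n_heads * d_k, n_heads=n_heads)
    q = torch.randn(bs, n_heads, q_len, d_k)
    k = torch.randn(bs, n_heads, d_k, q_len)  # keys arrive pre-transposed
    v = torch.randn(bs, n_heads, q_len, d_v)
    out, attn = sdp(q, k, v)
    assert out.shape == (bs, n_heads, q_len, d_v)
    assert attn.shape == (bs, n_heads, q_len, q_len)

    mha = MultiheadAttention(n_heads * d_k, n_heads, d_k, d_v, res_attention=False)
    x = torch.randn(bs, q_len, n_heads * d_k)
    out, attn = mha(x)  # K and V default to Q (self-attention)
    assert out.shape == (bs, q_len, n_heads * d_k)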
================================================
FILE: ts_classification_methods/patchtst/models/basics.py
================================================
__all__ = ['Transpose', 'LinBnDrop', 'SigmoidRange', 'sigmoid_range', 'get_activation_fn']
import torch
from torch import nn
class Transpose(nn.Module):
def __init__(self, *dims, contiguous=False):
super().__init__()
self.dims, self.contiguous = dims, contiguous
def forward(self, x):
if self.contiguous:
return x.transpose(*self.dims).contiguous()
else:
return x.transpose(*self.dims)
class SigmoidRange(nn.Module):
def __init__(self, low, high):
super().__init__()
self.low, self.high = low, high
# self.low, self.high = ranges
def forward(self, x):
# return sigmoid_range(x, self.low, self.high)
return torch.sigmoid(x) * (self.high - self.low) + self.low
class LinBnDrop(nn.Sequential):
"Module grouping `BatchNorm1d`, `Dropout` and `Linear` layers"
def __init__(self, n_in, n_out, bn=True, p=0., act=None, lin_first=False):
        layers = [nn.BatchNorm1d(n_out if lin_first else n_in)] if bn else []
if p != 0: layers.append(nn.Dropout(p))
lin = [nn.Linear(n_in, n_out, bias=not bn)]
if act is not None: lin.append(act)
layers = lin + layers if lin_first else layers + lin
super().__init__(*layers)
def sigmoid_range(x, low, high):
"Sigmoid function with range `(low, high)`"
return torch.sigmoid(x) * (high - low) + low
def get_activation_fn(activation):
if callable(activation):
return activation()
elif activation.lower() == "relu":
return nn.ReLU()
elif activation.lower() == "gelu":
return nn.GELU()
raise ValueError(f'{activation} is not available. You can use "relu", "gelu", or a callable')
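# --- Hedged usage sketch (not from the original file; shapes and ranges are illustrative assumptions) ---
# Quick sanity checks for the helpers above.
if __name__ == "__main__":
    x = torch.randn(4, 8)
    block = LinBnDrop(8, 16, bn=True, p=0.1, act=get_activation_fn("gelu"))
    assert block(x).shape == (4, 16)
    assert Transpose(1, 2)(torch.randn(2, 3, 5)).shape == (2, 5, 3)
    y = SigmoidRange(0.0, 10.0)(x)
    assert y.min() >= 0.0 and y.max() <= 10.0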
================================================
FILE: ts_classification_methods/patchtst/models/heads.py
================================================
import torch
from torch import nn

from patchtst.models.basics import SigmoidRange  # used by LinearRegressionHead when y_range is set
class LinearRegressionHead(nn.Module):
def __init__(self, n_vars, d_model, output_dim, head_dropout, y_range=None):
super().__init__()
self.y_range = y_range
self.flatten = nn.Flatten(start_dim=1)
self.dropout = nn.Dropout(head_dropout)
self.linear = nn.Linear(n_vars*d_model, output_dim)
def forward(self, x):
"""
x: [bs x nvars x d_model x num_patch]
output: [bs x output_dim]
"""
x = x[:,:,:,-1] # only consider the last item in the sequence, x: bs x nvars x d_model
x = self.flatten(x) # x: bs x nvars * d_model
x = self.dropout(x)
y = self.linear(x) # y: bs x output_dim
if self.y_range: y = SigmoidRange(*self.y_range)(y)
return y
class LinearClassificationHead(nn.Module):
def __init__(self, n_vars, d_model, n_classes, head_dropout):
super().__init__()
self.flatten = nn.Flatten(start_dim=1)
self.dropout = nn.Dropout(head_dropout)
self.linear = nn.Linear(n_vars*d_model, n_classes)
def forward(self, x):
"""
x: [bs x nvars x d_model x num_patch]
output: [bs x n_classes]
"""
x = x[:,:,:,-1] # only consider the last item in the sequence, x: bs x nvars x d_model
x = self.flatten(x) # x: bs x nvars * d_model
x = self.dropout(x)
y = self.linear(x) # y: bs x n_classes
return y
class LinearPredictionHead(nn.Module):
def __init__(self, individual, n_vars, d_model, num_patch, forecast_len, head_dropout=0):
super().__init__()
self.individual = individual
self.n_vars = n_vars
head_dim = d_model*num_patch
if self.individual:
self.linears = nn.ModuleList()
self.dropouts = nn.ModuleList()
self.flattens = nn.ModuleList()
for i in range(self.n_vars):
self.flattens.append(nn.Flatten(start_dim=-2))
self.linears.append(nn.Linear(head_dim, forecast_len))
self.dropouts.append(nn.Dropout(head_dropout))
else:
self.flatten = nn.Flatten(start_dim=-2)
self.linear = nn.Linear(head_dim, forecast_len)
self.dropout = nn.Dropout(head_dropout)
def forward(self, x):
"""
x: [bs x nvars x d_model x num_patch]
output: [bs x forecast_len x nvars]
"""
if self.individual:
x_out = []
for i in range(self.n_vars):
z = self.flattens[i](x[:,i,:,:]) # z: [bs x d_model * num_patch]
z = self.linears[i](z) # z: [bs x forecast_len]
z = self.dropouts[i](z)
x_out.append(z)
x = torch.stack(x_out, dim=1) # x: [bs x nvars x forecast_len]
else:
x = self.flatten(x)
x = self.dropout(x)
x = self.linear(x)
return x.transpose(2,1) # [bs x forecast_len x nvars]
class LinearPretrainHead(nn.Module):
def __init__(self, d_model, patch_len, dropout):
super().__init__()
self.dropout = nn.Dropout(dropout)
self.linear = nn.Linear(d_model, patch_len)
def forward(self, x):
"""
x: tensor [bs x nvars x d_model x num_patch]
output: tensor [bs x nvars x num_patch x patch_len]
"""
x = x.transpose(2,3) # [bs x nvars x num_patch x d_model]
x = self.linear( self.dropout(x) ) # [bs x nvars x num_patch x patch_len]
x = x.permute(0,2,1,3) # [bs x num_patch x nvars x patch_len]
return x
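# --- Hedged shape check (not from the original file) ---
# The batch/feature sizes below are illustrative assumptions, not tied to any
# experiment in this repository.
if __name__ == "__main__":
    bs, n_vars, d_model, num_patch = 2, 3, 16, 10
    x = torch.randn(bs, n_vars, d_model, num_patch)
    assert LinearClassificationHead(n_vars, d_model, n_classes=5, head_dropout=0.0)(x).shape == (bs, 5)
    assert LinearRegressionHead(n_vars, d_model, output_dim=1, head_dropout=0.0)(x).shape == (bs, 1)
    assert LinearPredictionHead(False, n_vars, d_model, num_patch, forecast_len=24)(x).shape == (bs, 24, n_vars)
    assert LinearPretrainHead(d_model, patch_len=12, dropout=0.0)(x).shape == (bs, num_patch, n_vars, 12)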
================================================
FILE: ts_classification_methods/patchtst/models/patchTST.py
================================================
__all__ = ['PatchTST']
from patchtst.models.pos_encoding import *
from patchtst.models.basics import *
from patchtst.models.attention import *
# Cell
class PatchTST(nn.Module):
"""
Output dimension:
[bs x target_dim x nvars] for prediction
[bs x target_dim] for regression
[bs x target_dim] for classification
[bs x num_patch x n_vars x patch_len] for pretrain
"""
def __init__(self, c_in: int, target_dim: int, patch_len: int, stride: int, num_patch: int,
n_layers: int = 3, d_model=128, n_heads=16, shared_embedding=True, d_ff: int = 256,
norm: str = 'BatchNorm', attn_dropout: float = 0., dropout: float = 0., act: str = "gelu",
res_attention: bool = True, pre_norm: bool = False, store_attn: bool = False,
pe: str = 'zeros', learn_pe: bool = True, head_dropout=0,
head_type="prediction", individual=False,
y_range: Optional[tuple] = None, verbose: bool = False, **kwargs):
super().__init__()
        assert head_type in ['pretrain', 'prediction', 'regression', 'classification'], \
            'head type should be one of pretrain, prediction, regression, or classification'
# Backbone
self.backbone = PatchTSTEncoder(c_in, num_patch=num_patch, patch_len=patch_len,
n_layers=n_layers, d_model=d_model, n_heads=n_heads,
shared_embedding=shared_embedding, d_ff=d_ff,
attn_dropout=attn_dropout, dropout=dropout, act=act,
res_attention=res_attention, pre_norm=pre_norm, store_attn=store_attn,
pe=pe, learn_pe=learn_pe, verbose=verbose, **kwargs)
# Head
self.n_vars = c_in
self.head_type = head_type
if head_type == "pretrain":
            self.head = PretrainHead(d_model, patch_len, head_dropout)
elif head_type == "prediction":
self.head = PredictionHead(individual, self.n_vars, d_model, num_patch, target_dim, head_dropout)
elif head_type == "regression":
self.head = RegressionHead(self.n_vars, d_model, target_dim, head_dropout, y_range)
elif head_type == "classification":
self.head = ClassificationHead(self.n_vars, d_model, target_dim, head_dropout)
def forward(self, z):
"""
z: tensor [bs x num_patch x n_vars x patch_len]
"""
# print("1 raw z.shape = ", z.shape)
z = self.backbone(z) # z: [bs x nvars x d_model x num_patch]
# print("2 raw z.shape = ", z.shape)
z = self.head(z)
# print("3 raw z.shape = ", z.shape)
# z: [bs x target_dim x nvars] for prediction
# [bs x target_dim] for regression
# [bs x target_dim] for classification
# [bs x num_patch x n_vars x patch_len] for pretrain
return z
class RegressionHead(nn.Module):
def __init__(self, n_vars, d_model, output_dim, head_dropout, y_range=None):
super().__init__()
self.y_range = y_range
self.flatten = nn.Flatten(start_dim=1)
self.dropout = nn.Dropout(head_dropout)
self.linear = nn.Linear(n_vars * d_model, output_dim)
def forward(self, x):
"""
x: [bs x nvars x d_model x num_patch]
output: [bs x output_dim]
"""
x = x[:, :, :, -1] # only consider the last item in the sequence, x: bs x nvars x d_model
x = self.flatten(x) # x: bs x nvars * d_model
x = self.dropout(x)
y = self.linear(x) # y: bs x output_dim
if self.y_range: y = SigmoidRange(*self.y_range)(y)
return y
class ClassificationHead(nn.Module):
def __init__(self, n_vars, d_model, n_classes, head_dropout):
super().__init__()
self.flatten = nn.Flatten(start_dim=1)
self.dropout = nn.Dropout(head_dropout)
self.linear = nn.Linear(n_vars * d_model, n_classes)
def forward(self, x):
"""
x: [bs x nvars x d_model x num_patch]
output: [bs x n_classes]
"""
# print("1 x.shape = ", x.shape)
x = x[:, :, :, -1] # only consider the last item in the sequence, x: bs x nvars x d_model
# print("2 x.shape = ", x.shape)
x = self.flatten(x) # x: bs x nvars * d_model
# print("3 x.shape = ", x.shape)
x = self.dropout(x)
y = self.linear(x) # y: bs x n_classes
return y
class PredictionHead(nn.Module):
def __init__(self, individual, n_vars, d_model, num_patch, forecast_len, head_dropout=0, flatten=False):
super().__init__()
self.individual = individual
self.n_vars = n_vars
self.flatten = flatten
head_dim = d_model * num_patch
if self.individual:
self.linears = nn.ModuleList()
self.dropouts = nn.ModuleList()
self.flattens = nn.ModuleList()
for i in range(self.n_vars):
self.flattens.append(nn.Flatten(start_dim=-2))
self.linears.append(nn.Linear(head_dim, forecast_len))
self.dropouts.append(nn.Dropout(head_dropout))
else:
self.flatten = nn.Flatten(start_dim=-2)
self.linear = nn.Linear(head_dim, forecast_len)
self.dropout = nn.Dropout(head_dropout)
def forward(self, x):
"""
x: [bs x nvars x d_model x num_patch]
output: [bs x forecast_len x nvars]
"""
if self.individual:
x_out = []
for i in range(self.n_vars):
z = self.flattens[i](x[:, i, :, :]) # z: [bs x d_model * num_patch]
z = self.linears[i](z) # z: [bs x forecast_len]
z = self.dropouts[i](z)
x_out.append(z)
x = torch.stack(x_out, dim=1) # x: [bs x nvars x forecast_len]
else:
x = self.flatten(x) # x: [bs x nvars x (d_model * num_patch)]
x = self.dropout(x)
x = self.linear(x) # x: [bs x nvars x forecast_len]
return x.transpose(2, 1) # [bs x forecast_len x nvars]
class PretrainHead(nn.Module):
def __init__(self, d_model, patch_len, dropout):
super().__init__()
self.dropout = nn.Dropout(dropout)
self.linear = nn.Linear(d_model, patch_len)
def forward(self, x):
"""
x: tensor [bs x nvars x d_model x num_patch]
output: tensor [bs x nvars x num_patch x patch_len]
"""
x = x.transpose(2, 3) # [bs x nvars x num_patch x d_model]
x = self.linear(self.dropout(x)) # [bs x nvars x num_patch x patch_len]
x = x.permute(0, 2, 1, 3) # [bs x num_patch x nvars x patch_len]
return x
class PatchTSTEncoder(nn.Module):
def __init__(self, c_in, num_patch, patch_len,
n_layers=3, d_model=128, n_heads=16, shared_embedding=True,
d_ff=256, norm='BatchNorm', attn_dropout=0., dropout=0., act="gelu", store_attn=False,
res_attention=True, pre_norm=False,
pe='zeros', learn_pe=True, verbose=False, **kwargs):
super().__init__()
self.n_vars = c_in
self.num_patch = num_patch
self.patch_len = patch_len
self.d_model = d_model
self.shared_embedding = shared_embedding
# Input encoding: projection of feature vectors onto a d-dim vector space
if not shared_embedding:
self.W_P = nn.ModuleList()
for _ in range(self.n_vars): self.W_P.append(nn.Linear(patch_len, d_model))
else:
self.W_P = nn.Linear(patch_len, d_model)
# Positional encoding
self.W_pos = positional_encoding(pe, learn_pe, num_patch, d_model)
# Residual dropout
self.dropout = nn.Dropout(dropout)
# Encoder
self.encoder = TSTEncoder(d_model, n_heads, d_ff=d_ff, norm=norm, attn_dropout=attn_dropout, dropout=dropout,
pre_norm=pre_norm, activation=act, res_attention=res_attention, n_layers=n_layers,
store_attn=store_attn)
def forward(self, x) -> Tensor:
"""
x: tensor [bs x num_patch x nvars x patch_len]
"""
bs, num_patch, n_vars, patch_len = x.shape
# Input encoding
if not self.shared_embedding:
x_out = []
for i in range(n_vars):
z = self.W_P[i](x[:, :, i, :])
x_out.append(z)
x = torch.stack(x_out, dim=2)
else:
x = self.W_P(x) # x: [bs x num_patch x nvars x d_model]
x = x.transpose(1, 2) # x: [bs x nvars x num_patch x d_model]
u = torch.reshape(x, (bs * n_vars, num_patch, self.d_model)) # u: [bs * nvars x num_patch x d_model]
u = self.dropout(u + self.W_pos) # u: [bs * nvars x num_patch x d_model]
# print("before trans u.shape = ", u.shape)
# Encoder
z = self.encoder(u) # z: [bs * nvars x num_patch x d_model]
# print("end trans u.shape = ", z.shape)
z = torch.reshape(z, (-1, n_vars, num_patch, self.d_model)) # z: [bs x nvars x num_patch x d_model]
z = z.permute(0, 1, 3, 2) # z: [bs x nvars x d_model x num_patch]
return z
# Cell
class TSTEncoder(nn.Module):
def __init__(self, d_model, n_heads, d_ff=None,
norm='BatchNorm', attn_dropout=0., dropout=0., activation='gelu',
res_attention=False, n_layers=1, pre_norm=False, store_attn=False):
super().__init__()
self.layers = nn.ModuleList([TSTEncoderLayer(d_model, n_heads=n_heads, d_ff=d_ff, norm=norm,
attn_dropout=attn_dropout, dropout=dropout,
activation=activation, res_attention=res_attention,
pre_norm=pre_norm, store_attn=store_attn) for i in
range(n_layers)])
self.res_attention = res_attention
def forward(self, src: Tensor):
"""
src: tensor [bs x q_len x d_model]
"""
output = src
scores = None
if self.res_attention:
for mod in self.layers: output, scores = mod(output, prev=scores)
return output
else:
for mod in self.layers: output = mod(output)
return output
class TSTEncoderLayer(nn.Module):
def __init__(self, d_model, n_heads, d_ff=256, store_attn=False,
norm='BatchNorm', attn_dropout=0, dropout=0., bias=True,
activation="gelu", res_attention=False, pre_norm=False):
super().__init__()
assert not d_model % n_heads, f"d_model ({d_model}) must be divisible by n_heads ({n_heads})"
d_k = d_model // n_heads
d_v = d_model // n_heads
# Multi-Head attention
self.res_attention = res_attention
self.self_attn = MultiheadAttention(d_model, n_heads, d_k, d_v, attn_dropout=attn_dropout, proj_dropout=dropout,
res_attention=res_attention)
# Add & Norm
self.dropout_attn = nn.Dropout(dropout)
if "batch" in norm.lower():
self.norm_attn = nn.Sequential(Transpose(1, 2), nn.BatchNorm1d(d_model), Transpose(1, 2))
else:
self.norm_attn = nn.LayerNorm(d_model)
# Position-wise Feed-Forward
self.ff = nn.Sequential(nn.Linear(d_model, d_ff, bias=bias),
get_activation_fn(activation),
nn.Dropout(dropout),
nn.Linear(d_ff, d_model, bias=bias))
# Add & Norm
self.dropout_ffn = nn.Dropout(dropout)
if "batch" in norm.lower():
self.norm_ffn = nn.Sequential(Transpose(1, 2), nn.BatchNorm1d(d_model), Transpose(1, 2))
else:
self.norm_ffn = nn.LayerNorm(d_model)
self.pre_norm = pre_norm
self.store_attn = store_attn
def forward(self, src: Tensor, prev: Optional[Tensor] = None):
"""
src: tensor [bs x q_len x d_model]
"""
# Multi-Head attention sublayer
if self.pre_norm:
src = self.norm_attn(src)
## Multi-Head attention
if self.res_attention:
src2, attn, scores = self.self_attn(src, src, src, prev)
else:
src2, attn = self.self_attn(src, src, src)
if self.store_attn:
self.attn = attn
## Add & Norm
src = src + self.dropout_attn(src2) # Add: residual connection with residual dropout
if not self.pre_norm:
src = self.norm_attn(src)
# Feed-forward sublayer
if self.pre_norm:
src = self.norm_ffn(src)
## Position-wise Feed-Forward
src2 = self.ff(src)
## Add & Norm
src = src + self.dropout_ffn(src2) # Add: residual connection with residual dropout
if not self.pre_norm:
src = self.norm_ffn(src)
if self.res_attention:
return src, scores
else:
return src
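# --- Hedged end-to-end sketch (not from the original file; all sizes are illustrative assumptions) ---
# Builds a small PatchTST classifier and pushes one pre-patched batch through it.
# The model expects input already patched to [bs x num_patch x n_vars x patch_len]
# (see patch_mask.create_patch).
if __name__ == "__main__":
    bs, num_patch, n_vars, patch_len = 2, 10, 3, 12
    model = PatchTST(c_in=n_vars, target_dim=5, patch_len=patch_len, stride=patch_len,
                     num_patch=num_patch, n_layers=2, d_model=64, n_heads=4,
                     head_type="classification")
    z = torch.randn(bs, num_patch, n_vars, patch_len)
    assert model(z).shape == (bs, 5)  # [bs x target_dim]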
================================================
FILE: ts_classification_methods/patchtst/models/pos_encoding.py
================================================
__all__ = ['PositionalEncoding', 'SinCosPosEncoding', 'positional_encoding']
# Cell
import torch
from torch import nn
import math
# Cell
def PositionalEncoding(q_len, d_model, normalize=True):
pe = torch.zeros(q_len, d_model)
position = torch.arange(0, q_len).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
if normalize:
pe = pe - pe.mean()
pe = pe / (pe.std() * 10)
return pe
SinCosPosEncoding = PositionalEncoding
def positional_encoding(pe, learn_pe, q_len, d_model):
# Positional encoding
    if pe is None:
W_pos = torch.empty((q_len, d_model)) # pe = None and learn_pe = False can be used to measure impact of pe
nn.init.uniform_(W_pos, -0.02, 0.02)
learn_pe = False
elif pe == 'zero':
W_pos = torch.empty((q_len, 1))
nn.init.uniform_(W_pos, -0.02, 0.02)
elif pe == 'zeros':
W_pos = torch.empty((q_len, d_model))
nn.init.uniform_(W_pos, -0.02, 0.02)
elif pe == 'normal' or pe == 'gauss':
W_pos = torch.zeros((q_len, 1))
torch.nn.init.normal_(W_pos, mean=0.0, std=0.1)
elif pe == 'uniform':
W_pos = torch.zeros((q_len, 1))
nn.init.uniform_(W_pos, a=0.0, b=0.1)
elif pe == 'sincos': W_pos = PositionalEncoding(q_len, d_model, normalize=True)
    else: raise ValueError(f"{pe} is not a valid pe (positional encoder). Available types: 'gauss'=='normal', \
        'zeros', 'zero', 'uniform', 'sincos', None.")
return nn.Parameter(W_pos, requires_grad=learn_pe)
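# --- Hedged usage sketch (not from the original file; sizes are illustrative assumptions) ---
# Shows the two most common settings: a learnable 'zeros' table (the PatchTST
# default) and a fixed 'sincos' table.
if __name__ == "__main__":
    W_learn = positional_encoding('zeros', learn_pe=True, q_len=10, d_model=64)
    W_fixed = positional_encoding('sincos', learn_pe=False, q_len=10, d_model=64)
    assert W_learn.shape == W_fixed.shape == (10, 64)
    assert W_learn.requires_grad and not W_fixed.requires_grad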
================================================
FILE: ts_classification_methods/patchtst/models/revin.py
================================================
import torch
from torch import nn
class RevIN(nn.Module):
def __init__(self, num_features: int, eps=1e-5, affine=True):
"""
:param num_features: the number of features or channels
:param eps: a value added for numerical stability
:param affine: if True, RevIN has learnable affine parameters
"""
super(RevIN, self).__init__()
self.num_features = num_features
self.eps = eps
self.affine = affine
if self.affine:
self._init_params()
def forward(self, x, mode:str):
if mode == 'norm':
self._get_statistics(x)
x = self._normalize(x)
elif mode == 'denorm':
x = self._denormalize(x)
else: raise NotImplementedError
return x
def _init_params(self):
# initialize RevIN params: (C,)
self.affine_weight = nn.Parameter(torch.ones(self.num_features))
self.affine_bias = nn.Parameter(torch.zeros(self.num_features))
def _get_statistics(self, x):
dim2reduce = tuple(range(1, x.ndim-1))
self.mean = torch.mean(x, dim=dim2reduce, keepdim=True).detach()
self.stdev = torch.sqrt(torch.var(x, dim=dim2reduce, keepdim=True, unbiased=False) + self.eps).detach()
def _normalize(self, x):
x = x - self.mean
x = x / self.stdev
if self.affine:
x = x * self.affine_weight
x = x + self.affine_bias
return x
def _denormalize(self, x):
if self.affine:
x = x - self.affine_bias
x = x / (self.affine_weight + self.eps*self.eps)
x = x * self.stdev
x = x + self.mean
return x
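# --- Hedged usage sketch (not from the original file) ---
# Round-trip check: normalizing then denormalizing recovers the input up to
# eps-sized error. The [bs x seq_len x n_vars] layout is an assumption
# consistent with the per-feature statistics computed above.
if __name__ == "__main__":
    revin = RevIN(num_features=3)
    x = torch.randn(4, 96, 3) * 5 + 2
    x_back = revin(revin(x, mode='norm'), mode='denorm')
    assert torch.allclose(x, x_back, atol=1e-4)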
================================================
FILE: ts_classification_methods/patchtst/patch_mask.py
================================================
from torch import nn
import torch
DTYPE = torch.float32
class GetAttr:
    "Inherit from this to have all attr accesses in `self._xtra` passed down to `self.default`"
    _default = 'default'
    def _component_attr_filter(self, k):
        if k.startswith('__') or k in ('_xtra', self._default): return False
        xtra = getattr(self, '_xtra', None)
        return xtra is None or k in xtra
    def _dir(self):
        return [k for k in dir(getattr(self, self._default)) if self._component_attr_filter(k)]
    def __getattr__(self, k):
        if self._component_attr_filter(k):
            attr = getattr(self, self._default, None)
            if attr is not None: return getattr(attr, k)
        raise AttributeError(k)
    def __dir__(self):
        # combine the class's own attributes with those delegated to `self.default`
        return dir(type(self)) + list(self.__dict__.keys()) + self._dir()
    # def __getstate__(self): return self.__dict__
    def __setstate__(self, data):
        self.__dict__.update(data)
def get_device(use_cuda=True, device_id=None, usage=5):
"Return or set default device; `use_cuda`: None - CUDA if available; True - error if not available; False - CPU"
if not torch.cuda.is_available():
use_cuda = False
else:
if device_id is None:
device_ids = get_available_cuda(usage=usage)
device_id = device_ids[0] # get the first available device
torch.cuda.set_device(device_id)
return torch.device(torch.cuda.current_device()) if use_cuda else torch.device('cpu')
def set_device(usage=5):
    "Set the current device to the first available one whose utilization is below `usage` percent"
    device_ids = get_available_cuda(usage=usage)
    torch.cuda.set_device(device_ids[0])  # first available device
def default_device(use_cuda=True):
"Return or set default device; `use_cuda`: None - CUDA if available; True - error if not available; False - CPU"
if not torch.cuda.is_available():
use_cuda = False
return torch.device(torch.cuda.current_device()) if use_cuda else torch.device('cpu')
def get_available_cuda(usage=10):
if not torch.cuda.is_available(): return
    # collect available cuda devices; only keep devices whose utilization is below `usage` percent
device_ids = []
for device in range(torch.cuda.device_count()):
if torch.cuda.utilization(device) < usage: device_ids.append(device)
return device_ids
def to_device(b, device=None, non_blocking=False):
"""
Recursively put `b` on `device`
components of b are torch tensors
"""
if device is None:
device = default_device(use_cuda=True)
if isinstance(b, dict):
return {key: to_device(val, device) for key, val in b.items()}
if isinstance(b, (list, tuple)):
return type(b)(to_device(o, device) for o in b)
return b.to(device, non_blocking=non_blocking)
def to_numpy(b):
"""
Components of b are torch tensors
"""
if isinstance(b, dict):
return {key: to_numpy(val) for key, val in b.items()}
if isinstance(b, (list, tuple)):
return type(b)(to_numpy(o) for o in b)
return b.detach().cpu().numpy()
class Callback(GetAttr):
_default = 'learner'
class SetupLearnerCB(Callback):
def __init__(self):
self.device = default_device(use_cuda=True)
def before_batch_train(self):
self._to_device()
def before_batch_valid(self):
self._to_device()
def before_batch_predict(self):
self._to_device()
def before_batch_test(self):
self._to_device()
def _to_device(self):
batch = to_device(self.batch, self.device)
if self.n_inp > 1:
xb, yb = batch
else:
xb, yb = batch, None
self.learner.batch = xb, yb
def before_fit(self):
"Set model to cuda before training"
self.learner.model.to(self.device)
self.learner.device = self.device
class GetPredictionsCB(Callback):
def __init__(self):
super().__init__()
def before_predict(self):
self.preds = []
def after_batch_predict(self):
# append the prediction after each forward batch
self.preds.append(self.pred)
def after_predict(self):
self.preds = torch.concat(self.preds) # .detach().cpu().numpy()
class GetTestCB(Callback):
def __init__(self):
super().__init__()
def before_test(self):
self.preds, self.targets = [], []
def after_batch_test(self):
# append the prediction after each forward batch
self.preds.append(self.pred)
self.targets.append(self.yb)
def after_test(self):
self.preds = torch.concat(self.preds) # .detach().cpu().numpy()
self.targets = torch.concat(self.targets) # .detach().cpu().numpy()
# Cell
class PatchCB(Callback):
def __init__(self, patch_len, stride):
"""
Callback used to perform patching on the batch input data
Args:
patch_len: patch length
stride: stride
"""
self.patch_len = patch_len
self.stride = stride
def before_forward(self): self.set_patch()
def set_patch(self):
"""
take xb from learner and convert to patch: [bs x seq_len x n_vars] -> [bs x num_patch x n_vars x patch_len]
"""
xb_patch, num_patch = create_patch(self.xb, self.patch_len, self.stride) # xb: [bs x seq_len x n_vars]
# learner get the transformed input
self.learner.xb = xb_patch # xb_patch: [bs x num_patch x n_vars x patch_len]
class PatchMaskCB(Callback):
def __init__(self, patch_len, stride, mask_ratio,
mask_when_pred: bool = False):
"""
        Callback used to perform the pretext task of reconstructing the original data after a binary mask has been applied.
Args:
patch_len: patch length
stride: stride
mask_ratio: mask ratio
"""
self.patch_len = patch_len
self.stride = stride
self.mask_ratio = mask_ratio
    def before_fit(self):
        # overwrite the predefined loss function
        self.learner.loss_func = self._loss
def before_forward(self): self.patch_masking()
def patch_masking(self):
"""
xb: [bs x seq_len x n_vars] -> [bs x num_patch x n_vars x patch_len]
"""
xb_patch, num_patch = create_patch(self.xb, self.patch_len,
self.stride) # xb_patch: [bs x num_patch x n_vars x patch_len]
xb_mask, _, self.mask, _ = random_masking(xb_patch,
self.mask_ratio) # xb_mask: [bs x num_patch x n_vars x patch_len]
self.mask = self.mask.bool() # mask: [bs x num_patch x n_vars]
self.learner.xb = xb_mask # learner.xb: masked 4D tensor
self.learner.yb = xb_patch # learner.yb: non-masked 4d tensor
def _loss(self, preds, target):
"""
preds: [bs x num_patch x n_vars x patch_len]
targets: [bs x num_patch x n_vars x patch_len]
"""
loss = (preds - target) ** 2
loss = loss.mean(dim=-1)
loss = (loss * self.mask).sum() / self.mask.sum()
return loss
def create_patch(xb, patch_len, stride):
"""
xb: [bs x seq_len x n_vars]
"""
seq_len = xb.shape[1]
num_patch = (max(seq_len, patch_len) - patch_len) // stride + 1
tgt_len = patch_len + stride * (num_patch - 1)
s_begin = seq_len - tgt_len
xb = xb[:, s_begin:, :] # xb: [bs x tgt_len x nvars]
xb = xb.unfold(dimension=1, size=patch_len, step=stride) # xb: [bs x num_patch x n_vars x patch_len]
return xb, num_patch
class Patch(nn.Module):
def __init__(self, seq_len, patch_len, stride):
super().__init__()
self.seq_len = seq_len
self.patch_len = patch_len
self.stride = stride
self.num_patch = (max(seq_len, patch_len) - patch_len) // stride + 1
tgt_len = patch_len + stride * (self.num_patch - 1)
self.s_begin = seq_len - tgt_len
def forward(self, x):
"""
x: [bs x seq_len x n_vars]
"""
x = x[:, self.s_begin:, :]
x = x.unfold(dimension=1, size=self.patch_len, step=self.stride) # xb: [bs x num_patch x n_vars x patch_len]
return x
def random_masking(xb, mask_ratio):
# xb: [bs x num_patch x n_vars x patch_len]
bs, L, nvars, D = xb.shape
x = xb.clone()
len_keep = int(L * (1 - mask_ratio))
noise = torch.rand(bs, L, nvars, device=xb.device) # noise in [0, 1], bs x L x nvars
# sort noise for each sample
ids_shuffle = torch.argsort(noise, dim=1) # ascend: small is keep, large is remove
ids_restore = torch.argsort(ids_shuffle, dim=1) # ids_restore: [bs x L x nvars]
# keep the first subset
ids_keep = ids_shuffle[:, :len_keep, :] # ids_keep: [bs x len_keep x nvars]
x_kept = torch.gather(x, dim=1, index=ids_keep.unsqueeze(-1).repeat(1, 1, 1,
D)) # x_kept: [bs x len_keep x nvars x patch_len]
# removed x
x_removed = torch.zeros(bs, L - len_keep, nvars, D,
device=xb.device) # x_removed: [bs x (L-len_keep) x nvars x patch_len]
x_ = torch.cat([x_kept, x_removed], dim=1) # x_: [bs x L x nvars x patch_len]
# combine the kept part and the removed one
x_masked = torch.gather(x_, dim=1, index=ids_restore.unsqueeze(-1).repeat(1, 1, 1,
D)) # x_masked: [bs x num_patch x nvars x patch_len]
# generate the binary mask: 0 is keep, 1 is remove
mask = torch.ones([bs, L, nvars], device=x.device) # mask: [bs x num_patch x nvars]
mask[:, :len_keep, :] = 0
# unshuffle to get the binary mask
mask = torch.gather(mask, dim=1, index=ids_restore) # [bs x num_patch x nvars]
return x_masked, x_kept, mask, ids_restore
def random_masking_3D(xb, mask_ratio):
# xb: [bs x num_patch x dim]
bs, L, D = xb.shape
x = xb.clone()
len_keep = int(L * (1 - mask_ratio))
noise = torch.rand(bs, L, device=xb.device) # noise in [0, 1], bs x L
# sort noise for each sample
ids_shuffle = torch.argsort(noise, dim=1) # ascend: small is keep, large is remove
ids_restore = torch.argsort(ids_shuffle, dim=1) # ids_restore: [bs x L]
# keep the first subset
ids_keep = ids_shuffle[:, :len_keep] # ids_keep: [bs x len_keep]
x_kept = torch.gather(x, dim=1, index=ids_keep.unsqueeze(-1).repeat(1, 1, D)) # x_kept: [bs x len_keep x dim]
# removed x
x_removed = torch.zeros(bs, L - len_keep, D, device=xb.device) # x_removed: [bs x (L-len_keep) x dim]
x_ = torch.cat([x_kept, x_removed], dim=1) # x_: [bs x L x dim]
# combine the kept part and the removed one
x_masked = torch.gather(x_, dim=1,
index=ids_restore.unsqueeze(-1).repeat(1, 1, D)) # x_masked: [bs x num_patch x dim]
# generate the binary mask: 0 is keep, 1 is remove
mask = torch.ones([bs, L], device=x.device) # mask: [bs x num_patch]
mask[:, :len_keep] = 0
# unshuffle to get the binary mask
mask = torch.gather(mask, dim=1, index=ids_restore) # [bs x num_patch]
return x_masked, x_kept, mask, ids_restore
if __name__ == "__main__":
bs, L, nvars, D = 2, 20, 4, 5
xb = torch.randn(bs, L, nvars, D)
xb_mask, mask, ids_restore = create_mask(xb, mask_ratio=0.5)
breakpoint()
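    # --- Hedged patching demo (not from the original file; sizes are illustrative
    # assumptions). With seq_len=20, patch_len=5, stride=5, create_patch gives
    # num_patch = (20 - 5) // 5 + 1 = 4.
    seq = torch.randn(bs, L, nvars)  # [bs x seq_len x n_vars]
    patches, num_patch = create_patch(seq, patch_len=5, stride=5)
    assert num_patch == 4
    assert patches.shape == (bs, 4, nvars, 5)  # [bs x num_patch x n_vars x patch_len]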
================================================
FILE: ts_classification_methods/patchtst/scripts/generator_patchtst.py
================================================
uea_all = ['ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions', 'CharacterTrajectories',
'Cricket', 'DuckDuckGeese', 'EigenWorms', 'Epilepsy', 'EthanolConcentration', 'ERing',
'FaceDetection', 'FingerMovements', 'HandMovementDirection', 'Handwriting',
'Heartbeat', 'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PenDigits', 'PEMS-SF', 'PhonemeSpectra', 'RacketSports', 'SelfRegulationSCP1',
'SelfRegulationSCP2', 'SpokenArabicDigits', 'StandWalkJump', 'UWaveGestureLibrary']
ucr_dataset = ['ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY', 'AllGestureWiimoteZ', 'ArrowHead', 'BME',
'Beef',
'BeetleFly', 'BirdChicken', 'CBF', 'Car', 'Chinatown', 'ChlorineConcentration', 'CinCECGTorso', 'Coffee',
'Computers',
'CricketX', 'CricketY', 'CricketZ', 'Crop', 'DiatomSizeReduction', 'DistalPhalanxOutlineAgeGroup',
'DistalPhalanxOutlineCorrect', 'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame', 'DodgerLoopWeekend',
'ECG200', 'ECG5000', 'ECGFiveDays', 'EOGHorizontalSignal', 'EOGVerticalSignal', 'Earthquakes',
'ElectricDevices',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords', 'Fish', 'FordA', 'FordB',
'FreezerRegularTrain',
'FreezerSmallTrain', 'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3', 'GesturePebbleZ1',
'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan', 'GunPointMaleVersusFemale', 'GunPointOldVersusYoung',
'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate', 'InsectEPGRegularTrain',
'InsectEPGSmallTrain',
'InsectWingbeatSound', 'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2', 'Lightning7',
'Mallat', 'Meat',
'MedicalImages', 'MelbournePedestrian', 'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain', 'MoteStrain',
'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OSULeaf', 'OliveOil', 'PLAID', 'PhalangesOutlinesCorrect', 'Phoneme',
'PickupGestureWiimoteZ', 'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'Plane', 'PowerCons',
'ProximalPhalanxOutlineAgeGroup', 'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices',
'Rock', 'ScreenType', 'SemgHandGenderCh2', 'SemgHandMovementCh2', 'SemgHandSubjectCh2',
'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace', 'SonyAIBORobotSurface1',
'SonyAIBORobotSurface2', 'StarLightCurves', 'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG', 'TwoPatterns', 'UMD',
'UWaveGestureLibraryAll',
'UWaveGestureLibraryX', 'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine', 'WordSynonyms',
'Worms',
'WormsTwoClass', 'Yoga']
code_main = 'main_gpt4ts_uea' ## main_patchtst_ucr main_gpt4ts_ucr mian_patchtst
_mu_all = [8] ## , 16, 24, 32, 48, 64
i = 1
for dataset in uea_all:
print("i = ", i, "dataset_name = ", dataset)
i = i + 1
save_csv_name = code_main + '_0702_' ## --len_k
with open('/dev_data/lz/time_series_label_noise/patchtst/scripts/patchtst_ucr.sh', 'a') as f:
f.write('python ' + code_main + '.py '
'--dataset ' + dataset
+ ' --epoch 1000 '+
'--save_csv_name ' + save_csv_name + ' --cuda cuda:1' + ';\n')
## nohup ./scripts/patchtst_uea.sh &
## nohup ./scripts/patchtst_ucr.sh &
================================================
FILE: ts_classification_methods/scripts/dilated_single_norm.sh
================================================
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ACSF1 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Adiac --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteX --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteY --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteZ --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ArrowHead --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BME --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CBF --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Chinatown --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ChlorineConcentration --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketX --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketY --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketZ --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Crop --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DiatomSizeReduction --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineCorrect --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopDay --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopGame --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopWeekend --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG200 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG5000 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECGFiveDays --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ElectricDevices --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EthanolLevel --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FiftyWords --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fish --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordA --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordB --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fungi --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD1 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD3 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ1 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPoint --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointAgeSpan --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointMaleVersusFemale --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointOldVersusYoung --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Ham --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HouseTwenty --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGRegularTrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGSmallTrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ItalyPowerDemand --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset LargeKitchenAppliances --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MedicalImages --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineCorrect --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax1 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OSULeaf --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PLAID --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigAirwayPressure --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigArtPressure --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigCVP --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Plane --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PowerCons --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineCorrect --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset RefrigerationDevices --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ScreenType --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandGenderCh2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandMovementCh2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandSubjectCh2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapeletSim --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapesAll --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmoothSubspace --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset StarLightCurves --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Strawberry --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SwedishLeaf --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Symbols --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation1 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Trace --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoLeadECG --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoPatterns --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UMD --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryAll --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryX --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryY --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryZ --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wafer --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WordSynonyms --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Worms --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WormsTwoClass --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0;
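# --- Depth-3 dilated backbone, nonlinear classifier head ---
# The commands below repeat the same supervised (directly_cls) sweep over the
# UCR datasets, changing only --classifier linear to --classifier nonlinear;
# results are written under the dilated3_nonlin_single_norm_0409_ prefix.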
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ACSF1 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Adiac --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteX --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteY --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteZ --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ArrowHead --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BME --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CBF --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Chinatown --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ChlorineConcentration --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketX --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketY --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketZ --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Crop --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DiatomSizeReduction --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineCorrect --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopDay --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopGame --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopWeekend --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG200 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG5000 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECGFiveDays --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ElectricDevices --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EthanolLevel --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FiftyWords --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fish --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordA --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordB --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fungi --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD1 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD3 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ1 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPoint --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointAgeSpan --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointMaleVersusFemale --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointOldVersusYoung --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Ham --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HouseTwenty --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGRegularTrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGSmallTrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ItalyPowerDemand --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset LargeKitchenAppliances --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MedicalImages --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineCorrect --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax1 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OSULeaf --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PLAID --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigAirwayPressure --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigArtPressure --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigCVP --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Plane --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PowerCons --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineCorrect --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset RefrigerationDevices --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ScreenType --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandGenderCh2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandMovementCh2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandSubjectCh2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapeletSim --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapesAll --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmoothSubspace --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset StarLightCurves --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Strawberry --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SwedishLeaf --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Symbols --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation1 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation2 --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Trace --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoLeadECG --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoPatterns --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UMD --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryAll --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryX --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryY --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryZ --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wafer --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WordSynonyms --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Worms --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WormsTwoClass --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode directly_cls --epoch 1000 --depth 3 --loss cross_entropy --save_csv_name dilated3_nonlin_single_norm_0409_ --cuda cuda:0;
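# --- Depth-10 dilated backbone, linear classifier head ---
# Same sweep again with --depth 10 instead of --depth 3; results are written
# under the dilated10_lin_single_norm_0409_ prefix.
#
# Since every command within a block differs only in --dataset, a block can be
# reproduced with a loop. A minimal sketch, assuming bash and a hypothetical
# datasets.txt listing one UCR dataset name per line (kept commented out so
# this script's behavior is unchanged):
#
#   while read -r ds; do
#     python train.py --backbone dilated --classifier linear --classifier_input 320 \
#       --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$ds" \
#       --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy \
#       --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
#   done < datasets.txt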
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ACSF1 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Adiac --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteX --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteY --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteZ --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ArrowHead --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BME --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CBF --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Chinatown --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ChlorineConcentration --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketX --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketY --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketZ --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Crop --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DiatomSizeReduction --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineCorrect --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopDay --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopGame --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopWeekend --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG200 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG5000 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECGFiveDays --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ElectricDevices --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EthanolLevel --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FiftyWords --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fish --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordA --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordB --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fungi --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD1 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD3 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ1 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPoint --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointAgeSpan --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointMaleVersusFemale --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointOldVersusYoung --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Ham --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HouseTwenty --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGRegularTrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGSmallTrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ItalyPowerDemand --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset LargeKitchenAppliances --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MedicalImages --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineCorrect --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax1 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OSULeaf --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PLAID --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigAirwayPressure --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigArtPressure --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigCVP --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Plane --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PowerCons --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineCorrect --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset RefrigerationDevices --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ScreenType --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandGenderCh2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandMovementCh2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandSubjectCh2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapeletSim --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapesAll --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmoothSubspace --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset StarLightCurves --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Strawberry --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SwedishLeaf --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Symbols --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation1 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Trace --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoLeadECG --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoPatterns --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UMD --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryAll --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryX --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryY --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryZ --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wafer --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WordSynonyms --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Worms --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WormsTwoClass --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier linear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0;
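# The per-dataset runs above all share one configuration (dilated backbone,
# depth 10, linear classifier head, --normalize_way single) and differ only in
# --dataset. As a minimal sketch of an equivalent loop form (assuming the same
# train.py CLI used throughout this script): the function below is defined for
# illustration only and is never invoked here, and its dataset list is
# deliberately abbreviated rather than a full copy of the UCR archive names.
run_dilated10_linear() {
    # Abbreviated dataset list; extend with the remaining UCRArchive_2018 names as needed.
    for ds in CinCECGTorso Coffee Computers Yoga; do
        python train.py --backbone dilated --classifier linear --classifier_input 320 \
            --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$ds" \
            --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy \
            --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0
    done
}

# The runs below repeat the same sweep with a nonlinear classifier head
# (--classifier nonlinear) and the matching CSV prefix dilated10_nonlin_single_norm_0409_.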
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ACSF1 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Adiac --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteX --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteY --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteZ --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ArrowHead --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BME --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CBF --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Chinatown --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ChlorineConcentration --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketX --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketY --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketZ --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Crop --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DiatomSizeReduction --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineCorrect --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopDay --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopGame --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopWeekend --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG200 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG5000 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECGFiveDays --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ElectricDevices --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EthanolLevel --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FiftyWords --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fish --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordA --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordB --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fungi --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD1 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD3 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ1 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPoint --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointAgeSpan --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointMaleVersusFemale --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointOldVersusYoung --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Ham --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HouseTwenty --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGRegularTrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGSmallTrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ItalyPowerDemand --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset LargeKitchenAppliances --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MedicalImages --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineCorrect --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax1 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OSULeaf --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PLAID --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigAirwayPressure --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigArtPressure --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigCVP --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Plane --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PowerCons --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineCorrect --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset RefrigerationDevices --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ScreenType --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandGenderCh2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandMovementCh2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandSubjectCh2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapeletSim --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapesAll --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmoothSubspace --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset StarLightCurves --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Strawberry --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SwedishLeaf --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Symbols --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation1 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation2 --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Trace --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoLeadECG --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoPatterns --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UMD --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryAll --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryX --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryY --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryZ --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wafer --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WordSynonyms --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Worms --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WormsTwoClass --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
python train.py --backbone dilated --classifier nonlinear --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode directly_cls --epoch 1000 --depth 10 --loss cross_entropy --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0;
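# Note (editorial): the per-dataset commands above could equivalently be
# expressed as a single loop; a minimal sketch, assuming the same flags
# apply to every dataset (dataset list abbreviated here):
#
#   for ds in PhalangesOutlinesCorrect Phoneme PickupGestureWiimoteZ ... Yoga; do
#       python train.py --backbone dilated --classifier nonlinear \
#           --classifier_input 320 --dataroot /SSD/lz/UCRArchive_2018 \
#           --normalize_way single --dataset "$ds" --mode directly_cls \
#           --epoch 1000 --depth 10 --loss cross_entropy \
#           --save_csv_name dilated10_nonlin_single_norm_0409_ --cuda cuda:0
#   done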
================================================
FILE: ts_classification_methods/scripts/fcn_lin_set_norm.sh
================================================
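# Runs the FCN backbone with a linear classifier head (128-d features fed
# into the classifier) directly on each UCR 2018 dataset in turn. Flag
# meanings below are inferred from the option names used in this repository
# and may differ from train.py's actual semantics:
#   --normalize_way train_set  presumably z-normalizes each series using
#                              statistics computed over the whole training set
#   --mode directly_cls        supervised classification trained from scratch
#                              (no pre-training stage)
#   --save_csv_name            prefix for the per-dataset result CSV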
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ACSF1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Adiac --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset AllGestureWiimoteX --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset AllGestureWiimoteY --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset AllGestureWiimoteZ --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ArrowHead --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset BME --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Beef --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset BeetleFly --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset BirdChicken --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CBF --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Car --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Chinatown --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ChlorineConcentration --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CinCECGTorso --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Coffee --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Computers --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CricketX --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CricketY --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CricketZ --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Crop --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DiatomSizeReduction --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DistalPhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DistalPhalanxOutlineCorrect --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DistalPhalanxTW --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DodgerLoopDay --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DodgerLoopGame --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DodgerLoopWeekend --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ECG200 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ECG5000 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ECGFiveDays --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset EOGHorizontalSignal --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset EOGVerticalSignal --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Earthquakes --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ElectricDevices --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset EthanolLevel --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FaceAll --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FaceFour --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FacesUCR --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FiftyWords --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Fish --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FordA --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FordB --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FreezerRegularTrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FreezerSmallTrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Fungi --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GestureMidAirD1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GestureMidAirD2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GestureMidAirD3 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GesturePebbleZ1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GesturePebbleZ2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GunPoint --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GunPointAgeSpan --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GunPointMaleVersusFemale --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GunPointOldVersusYoung --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Ham --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset HandOutlines --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Haptics --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Herring --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset HouseTwenty --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset InlineSkate --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset InsectEPGRegularTrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset InsectEPGSmallTrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset InsectWingbeatSound --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ItalyPowerDemand --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset LargeKitchenAppliances --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Lightning2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Lightning7 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Mallat --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Meat --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MedicalImages --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MelbournePedestrian --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MiddlePhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MiddlePhalanxOutlineCorrect --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MiddlePhalanxTW --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MixedShapesRegularTrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MixedShapesSmallTrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MoteStrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset NonInvasiveFetalECGThorax1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset NonInvasiveFetalECGThorax2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset OSULeaf --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset OliveOil --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PLAID --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PhalangesOutlinesCorrect --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Phoneme --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PickupGestureWiimoteZ --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PigAirwayPressure --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PigArtPressure --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PigCVP --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Plane --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PowerCons --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ProximalPhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ProximalPhalanxOutlineCorrect --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ProximalPhalanxTW --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset RefrigerationDevices --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Rock --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ScreenType --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SemgHandGenderCh2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SemgHandMovementCh2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SemgHandSubjectCh2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ShakeGestureWiimoteZ --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ShapeletSim --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ShapesAll --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SmallKitchenAppliances --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SmoothSubspace --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SonyAIBORobotSurface1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SonyAIBORobotSurface2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset StarLightCurves --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Strawberry --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SwedishLeaf --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Symbols --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SyntheticControl --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ToeSegmentation1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ToeSegmentation2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Trace --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset TwoLeadECG --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset TwoPatterns --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UMD --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UWaveGestureLibraryAll --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UWaveGestureLibraryX --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UWaveGestureLibraryY --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UWaveGestureLibraryZ --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Wafer --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Wine --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset WordSynonyms --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Worms --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset WormsTwoClass --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Yoga --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;
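# Note (editorial): a sketch of how this file could be regenerated from a
# dataset list, assuming a hypothetical plain-text file ucr_datasets.txt
# with one UCR dataset name per line:
#
#   while read -r ds; do
#       printf 'python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset %s --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1;\n' "$ds"
#   done < ucr_datasets.txt > fcn_lin_set_norm.sh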
================================================
FILE: ts_classification_methods/scripts/fcn_lin_single_norm.sh
================================================
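# Same sweep as fcn_lin_set_norm.sh except for --normalize_way single
# (presumably per-series z-normalization rather than train-set statistics)
# and the result-CSV prefix/date stamp (0311 vs 0407).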
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ACSF1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Adiac --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteX --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteY --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteZ --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ArrowHead --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BME --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CBF --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Chinatown --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ChlorineConcentration --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketX --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketY --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketZ --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Crop --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DiatomSizeReduction --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineCorrect --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopDay --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopGame --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopWeekend --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG200 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG5000 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECGFiveDays --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ElectricDevices --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EthanolLevel --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FiftyWords --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fish --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordA --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordB --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fungi --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD3 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPoint --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointAgeSpan --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointMaleVersusFemale --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointOldVersusYoung --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Ham --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HouseTwenty --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGRegularTrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGSmallTrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ItalyPowerDemand --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset LargeKitchenAppliances --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MedicalImages --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineCorrect --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OSULeaf --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PLAID --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigAirwayPressure --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigArtPressure --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigCVP --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Plane --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PowerCons --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineCorrect --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset RefrigerationDevices --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ScreenType --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandGenderCh2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandMovementCh2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandSubjectCh2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapeletSim --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapesAll --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmoothSubspace --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset StarLightCurves --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Strawberry --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SwedishLeaf --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Symbols --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation2 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Trace --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoLeadECG --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoPatterns --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UMD --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryAll --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryX --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryY --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryZ --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wafer --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WordSynonyms --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Worms --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WormsTwoClass --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1;
================================================
FILE: ts_classification_methods/scripts/generator_dilated.py
================================================
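# Generator script: for each UCR-2018 dataset listed below, it appends two
# supervised training commands for the dilated-CNN backbone (depth 3, then
# depth 10) to scripts/dilated_single_norm.sh.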
ucr_dataset = ['ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY', 'AllGestureWiimoteZ', 'ArrowHead', 'BME',
'Beef',
'BeetleFly', 'BirdChicken', 'CBF', 'Car', 'Chinatown', 'ChlorineConcentration', 'CinCECGTorso', 'Coffee',
'Computers',
'CricketX', 'CricketY', 'CricketZ', 'Crop', 'DiatomSizeReduction', 'DistalPhalanxOutlineAgeGroup',
'DistalPhalanxOutlineCorrect', 'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame', 'DodgerLoopWeekend',
'ECG200', 'ECG5000', 'ECGFiveDays', 'EOGHorizontalSignal', 'EOGVerticalSignal', 'Earthquakes',
'ElectricDevices',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords', 'Fish', 'FordA', 'FordB',
'FreezerRegularTrain',
'FreezerSmallTrain', 'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3', 'GesturePebbleZ1',
'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan', 'GunPointMaleVersusFemale', 'GunPointOldVersusYoung',
'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate', 'InsectEPGRegularTrain',
'InsectEPGSmallTrain',
'InsectWingbeatSound', 'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2', 'Lightning7',
'Mallat', 'Meat',
'MedicalImages', 'MelbournePedestrian', 'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain', 'MoteStrain',
'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OSULeaf', 'OliveOil', 'PLAID', 'PhalangesOutlinesCorrect', 'Phoneme',
'PickupGestureWiimoteZ', 'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'Plane', 'PowerCons',
'ProximalPhalanxOutlineAgeGroup', 'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices',
'Rock', 'ScreenType', 'SemgHandGenderCh2', 'SemgHandMovementCh2', 'SemgHandSubjectCh2',
'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace', 'SonyAIBORobotSurface1',
'SonyAIBORobotSurface2', 'StarLightCurves', 'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG', 'TwoPatterns', 'UMD',
'UWaveGestureLibraryAll',
'UWaveGestureLibraryX', 'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine', 'WordSynonyms',
'Worms',
'WormsTwoClass', 'Yoga']
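# ucr_dataset above enumerates all 128 datasets of the UCR Archive 2018.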
i = 0
for dataset in ucr_dataset:
    print("i = ", i, "dataset_name = ", dataset)
    i = i + 1
    '''
    python train.py --backbone dilated --classifier linear --classifier_input 320 --depth 3 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ACSF1 --mode directly_cls --epoch 10 --loss cross_entropy --save_csv_name test_nonlin_set_norm_0409_ --cuda cuda:1
    '''
    with open('/SSD/lz/time_tsm/scripts/dilated_single_norm.sh', 'a') as f:
        f.write('python train.py --backbone dilated --classifier linear --classifier_input 320 '
                '--dataroot /SSD/lz/UCRArchive_2018 --normalize_way single '
                '--dataset ' + dataset
                + ' --mode directly_cls --epoch 1000 --depth 3 ' +
                ' --loss cross_entropy --save_csv_name dilated3_lin_single_norm_0409_ --cuda cuda:0' + ';\n')
i = 0
for dataset in ucr_dataset:
    print("i = ", i, "dataset_name = ", dataset)
    i = i + 1
    '''
    python train.py --backbone dilated --classifier linear --classifier_input 320 --depth 10 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ACSF1 --mode directly_cls --epoch 10 --loss cross_entropy --save_csv_name test_nonlin_set_norm_0409_ --cuda cuda:1
    '''
    with open('/SSD/lz/time_tsm/scripts/dilated_single_norm.sh', 'a') as f:
        f.write('python train.py --backbone dilated --classifier linear --classifier_input 320 '
                '--dataroot /SSD/lz/UCRArchive_2018 --normalize_way single '
                '--dataset ' + dataset
                + ' --mode directly_cls --epoch 1000 --depth 10 ' +
                ' --loss cross_entropy --save_csv_name dilated10_lin_single_norm_0409_ --cuda cuda:0' + ';\n')
## nohup ./scripts/dilated_single_norm.sh &
================================================
FILE: ts_classification_methods/scripts/generator_fcn.py
================================================
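# Generator script for the FCN backbone. The active block writes the
# linear-classifier / per-sample ("single") normalization commands to
# scripts/fcn_lin_single_norm.sh (these are the commands in the .sh file
# above); the commented-out blocks generated the nonlinear-classifier and
# train-set-normalization variants.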
ucr_dataset = ['ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY', 'AllGestureWiimoteZ', 'ArrowHead', 'BME',
'Beef',
'BeetleFly', 'BirdChicken', 'CBF', 'Car', 'Chinatown', 'ChlorineConcentration', 'CinCECGTorso', 'Coffee',
'Computers',
'CricketX', 'CricketY', 'CricketZ', 'Crop', 'DiatomSizeReduction', 'DistalPhalanxOutlineAgeGroup',
'DistalPhalanxOutlineCorrect', 'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame', 'DodgerLoopWeekend',
'ECG200', 'ECG5000', 'ECGFiveDays', 'EOGHorizontalSignal', 'EOGVerticalSignal', 'Earthquakes',
'ElectricDevices',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords', 'Fish', 'FordA', 'FordB',
'FreezerRegularTrain',
'FreezerSmallTrain', 'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3', 'GesturePebbleZ1',
'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan', 'GunPointMaleVersusFemale', 'GunPointOldVersusYoung',
'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate', 'InsectEPGRegularTrain',
'InsectEPGSmallTrain',
'InsectWingbeatSound', 'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2', 'Lightning7',
'Mallat', 'Meat',
'MedicalImages', 'MelbournePedestrian', 'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain', 'MoteStrain',
'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OSULeaf', 'OliveOil', 'PLAID', 'PhalangesOutlinesCorrect', 'Phoneme',
'PickupGestureWiimoteZ', 'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'Plane', 'PowerCons',
'ProximalPhalanxOutlineAgeGroup', 'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices',
'Rock', 'ScreenType', 'SemgHandGenderCh2', 'SemgHandMovementCh2', 'SemgHandSubjectCh2',
'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace', 'SonyAIBORobotSurface1',
'SonyAIBORobotSurface2', 'StarLightCurves', 'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG', 'TwoPatterns', 'UMD',
'UWaveGestureLibraryAll',
'UWaveGestureLibraryX', 'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine', 'WordSynonyms',
'Worms',
'WormsTwoClass', 'Yoga']
i = 0
for dataset in ucr_dataset:
    print("i = ", i, "dataset_name = ", dataset)
    i = i + 1
    # '''
    # python train.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ACSF1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_nonlin_set_norm_0404_ --cuda cuda:1
    # '''
    # with open('/SSD/lz/time_tsm/scripts/fcn_nonlin_set_norm.sh', 'a') as f:
    #     f.write('python train.py --backbone fcn --classifier nonlinear --classifier_input 128 '
    #             '--dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set '
    #             '--dataset ' + dataset
    #             + ' --mode directly_cls --epoch 1000 ' +
    #             ' --loss cross_entropy --save_csv_name fcn_nonlin_set_norm_0404_ --cuda cuda:1' + ';\n')
    #
    # '''
    # python train.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_nonlin_single_norm_0404_ --cuda cuda:1;
    # '''
    # with open('/SSD/lz/time_tsm/scripts/fcn_nonlin_single_norm.sh', 'a') as f:
    #     f.write('python train.py --backbone fcn --classifier nonlinear --classifier_input 128 '
    #             '--dataroot /SSD/lz/UCRArchive_2018 --normalize_way single '
    #             '--dataset ' + dataset
    #             + ' --mode directly_cls --epoch 1000 ' +
    #             ' --loss cross_entropy --save_csv_name fcn_nonlin_single_norm_0404_ --cuda cuda:1' + ';\n')
    '''
    python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ACSF1 --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_nonlin_set_norm_0404_ --cuda cuda:1
    '''
    # with open('/SSD/lz/time_tsm/scripts/fcn_lin_set_norm.sh', 'a') as f:
    #     f.write('python train.py --backbone fcn --classifier linear --classifier_input 128 '
    #             '--dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set '
    #             '--dataset ' + dataset
    #             + ' --mode directly_cls --epoch 1000 ' +
    #             ' --loss cross_entropy --save_csv_name fcn_lin_set_norm_0407_ --cuda cuda:1' + ';\n')
    '''
    python train.py --backbone fcn --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode directly_cls --epoch 1000 --loss cross_entropy --save_csv_name fcn_nonlin_single_norm_0404_ --cuda cuda:1;
    '''
    with open('/SSD/lz/time_tsm/scripts/fcn_lin_single_norm.sh', 'a') as f:
        f.write('python train.py --backbone fcn --classifier linear --classifier_input 128 '
                '--dataroot /SSD/lz/UCRArchive_2018 --normalize_way single '
                '--dataset ' + dataset
                + ' --mode directly_cls --epoch 1000 ' +
                ' --loss cross_entropy --save_csv_name fcn_lin_single_norm_0311_ --cuda cuda:1' + ';\n')
## nohup ./scripts/fcn_lin_set_norm.sh &
## nohup ./scripts/fcn_lin_single_norm.sh &
================================================
FILE: ts_classification_methods/scripts/generator_pretrain_cls.py
================================================
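# Generator for the transfer-learning pipeline in transfer_pretrain_finetune.sh:
# pretrain each source dataset under three objectives (classification,
# reconstruction with an FCN decoder, reconstruction with an RNN decoder), then
# fine-tune every (source, target) pair with the matching transfer strategy.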
source_datasets = ['Crop', 'ElectricDevices', 'StarLightCurves', 'Wafer', 'ECG5000', 'TwoPatterns', 'FordA',
'UWaveGestureLibraryAll', 'UWaveGestureLibraryX', 'UWaveGestureLibraryY', 'UWaveGestureLibraryZ',
'FordB', 'ChlorineConcentration', 'NonInvasiveFetalECGThorax1', 'NonInvasiveFetalECGThorax2']
target_min_datasets = ['BirdChicken', 'BeetleFly', 'Coffee', 'OliveOil', 'Beef', 'Rock', 'ShakeGestureWiimoteZ',
'PickupGestureWiimoteZ', 'Wine', 'FaceFour', 'Meat', 'Car', 'Lightning2', 'Herring',
'Lightning7']
target_med_datasets = ['Earthquakes', 'Haptics', 'Computers', 'DistalPhalanxTW', 'DistalPhalanxOutlineAgeGroup',
'MiddlePhalanxTW', 'MiddlePhalanxOutlineAgeGroup',
'SyntheticControl', 'ProximalPhalanxTW', 'ProximalPhalanxOutlineAgeGroup',
'SonyAIBORobotSurface1', 'InlineSkate', 'EOGVerticalSignal', 'EOGHorizontalSignal',
'SmallKitchenAppliances']
target_max_datasets = ['MoteStrain', 'HandOutlines', 'CinCECGTorso', 'Phoneme', 'InsectWingbeatSound', 'FacesUCR',
'FaceAll',
'Mallat', 'MixedShapesSmallTrain', 'PhalangesOutlinesCorrect', 'FreezerSmallTrain',
'MixedShapesRegularTrain', 'FreezerRegularTrain', 'Yoga', 'MelbournePedestrian']
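# Three groups of 15 targets (45 total); the min/med/max names presumably
# reflect small, medium, and large training-set sizes.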
target_datasets = target_min_datasets + target_med_datasets + target_max_datasets
print(target_datasets)
print(len(source_datasets), len(target_datasets))
i = 0
for dataset in source_datasets:  ## cls pretrain
    print("i = ", i, "dataset_name = ", dataset)
    i = i + 1
    '''
    python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode pretrain --epoch 20 --loss cross_entropy --cuda cuda:1;
    '''
    with open('/SSD/lz/time_tsm/scripts/transfer_pretrain_finetune.sh', 'a') as f:
        f.write(
            'python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single '
            '--dataset ' + dataset
            + ' --mode pretrain --epoch 2000 --classifier linear'
            + ' --loss cross_entropy --cuda cuda:1' + ';\n')
i = 0
for dataset in source_datasets:  ## rec fcn pretrain
    print("i = ", i, "dataset_name = ", dataset)
    i = i + 1
    '''
    python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode pretrain --epoch 20 --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
    '''
    with open('/SSD/lz/time_tsm/scripts/transfer_pretrain_finetune.sh', 'a') as f:
        f.write(
            'python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single '
            '--dataset ' + dataset
            + ' --mode pretrain --epoch 2000 --classifier linear'
            + ' --loss reconstruction --decoder_backbone fcn --cuda cuda:1' + ';\n')
i = 0
for dataset in source_datasets:  ## rec rnn pretrain
    print("i = ", i, "dataset_name = ", dataset)
    i = i + 1
    '''
    python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode pretrain --epoch 20 --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
    '''
    with open('/SSD/lz/time_tsm/scripts/transfer_pretrain_finetune.sh', 'a') as f:
        f.write(
            'python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single '
            '--dataset ' + dataset
            + ' --mode pretrain --epoch 2000 --classifier linear'
            + ' --loss reconstruction --decoder_backbone rnn --cuda cuda:1' + ';\n')
i = 0
for source_dataset in source_datasets:  ## cls finetune
    print("i = ", i, "dataset_name = ", source_dataset)
    i = i + 1
    for target_dataset in target_datasets:
        ### finetune cls
        '''
        python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 20 --loss cross_entropy --source_dataset Coffee --transfer_strategy classification --cuda cuda:1 --save_csv_name test_fcn_nonlin_single_norm_0409_;
        '''
        with open('/SSD/lz/time_tsm/scripts/transfer_pretrain_finetune.sh', 'a') as f:
            f.write(
                'python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 '
                '--dataroot /SSD/lz/UCRArchive_2018 --normalize_way single '
                '--dataset ' + target_dataset
                + ' --mode finetune --epoch 1000'
                + ' --loss cross_entropy --source_dataset ' + source_dataset + ' --transfer_strategy classification '
                '--cuda cuda:1 --save_csv_name ' + source_dataset + '_finetune_cls_0409_' + ';\n')
i = 0
for source_dataset in source_datasets:  ## rec fcn finetune
    print("i = ", i, "dataset_name = ", source_dataset)
    i = i + 1
    for target_dataset in target_datasets:
        ### finetune rec fcn
        '''
        python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 20 --loss cross_entropy --decoder_backbone fcn --source_dataset Coffee --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name test_fcn_nonlin_single_norm_0409_;
        '''
        with open('/SSD/lz/time_tsm/scripts/transfer_pretrain_finetune.sh', 'a') as f:
            f.write(
                'python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 '
                '--dataroot /SSD/lz/UCRArchive_2018 --normalize_way single '
                '--dataset ' + target_dataset
                + ' --mode finetune --epoch 1000'
                + ' --loss cross_entropy --decoder_backbone fcn --source_dataset ' + source_dataset + ' --transfer_strategy reconstruction '
                '--cuda cuda:1 --save_csv_name ' + source_dataset + '_finetune_rec_fcn_0409_' + ';\n')
i = 0
for source_dataset in source_datasets:  ## rec rnn finetune
    print("i = ", i, "dataset_name = ", source_dataset)
    i = i + 1
    for target_dataset in target_datasets:
        ### finetune rec rnn
        '''
        python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 20 --loss cross_entropy --decoder_backbone rnn --source_dataset Coffee --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name test_fcn_nonlin_single_norm_0409_;
        '''
        with open('/SSD/lz/time_tsm/scripts/transfer_pretrain_finetune.sh', 'a') as f:
            f.write(
                'python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 '
                '--dataroot /SSD/lz/UCRArchive_2018 --normalize_way single '
                '--dataset ' + target_dataset
                + ' --mode finetune --epoch 1000'
                + ' --loss cross_entropy --decoder_backbone rnn --source_dataset ' + source_dataset + ' --transfer_strategy reconstruction '
                '--cuda cuda:1 --save_csv_name ' + source_dataset + '_finetune_rec_rnn_0409_' + ';\n')
## nohup ./scripts/transfer_pretrain_finetune.sh &
================================================
FILE: ts_classification_methods/scripts/transfer_pretrain_finetune.sh
================================================
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Crop --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ElectricDevices --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset StarLightCurves --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wafer --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG5000 --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoPatterns --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordA --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryAll --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryX --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryY --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryZ --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordB --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ChlorineConcentration --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax1 --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task classification --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax2 --mode pretrain --epoch 2000 --classifier linear --loss cross_entropy --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Crop --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ElectricDevices --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset StarLightCurves --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wafer --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG5000 --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoPatterns --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordA --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryAll --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryX --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryY --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryZ --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordB --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ChlorineConcentration --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax1 --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax2 --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone fcn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Crop --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ElectricDevices --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset StarLightCurves --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wafer --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG5000 --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoPatterns --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordA --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryAll --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryX --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryY --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryZ --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordB --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ChlorineConcentration --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax1 --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task reconstruction --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax2 --mode pretrain --epoch 2000 --classifier linear --loss reconstruction --decoder_backbone rnn --cuda cuda:1;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGVerticalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MoteStrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset HandOutlines --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset CinCECGTorso --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Phoneme --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InsectWingbeatSound --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FacesUCR --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FaceAll --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Mallat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Crop --transfer_strategy classification --cuda cuda:1 --save_csv_name Crop_finetune_cls_0409_;
# Fine-tune the FCN pre-trained on ElectricDevices on each UCR target dataset.
for ds in \
    BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ PickupGestureWiimoteZ \
    Wine FaceFour Meat Car Lightning2 Herring Lightning7 Earthquakes Haptics Computers \
    DistalPhalanxTW DistalPhalanxOutlineAgeGroup MiddlePhalanxTW MiddlePhalanxOutlineAgeGroup \
    SyntheticControl ProximalPhalanxTW ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 \
    InlineSkate EOGVerticalSignal EOGHorizontalSignal SmallKitchenAppliances MoteStrain \
    HandOutlines CinCECGTorso Phoneme InsectWingbeatSound FacesUCR FaceAll Mallat \
    MixedShapesSmallTrain PhalangesOutlinesCorrect FreezerSmallTrain MixedShapesRegularTrain \
    FreezerRegularTrain Yoga MelbournePedestrian; do
    python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$ds" --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ElectricDevices --transfer_strategy classification --cuda cuda:1 --save_csv_name ElectricDevices_finetune_cls_0409_
done
# Fine-tune the FCN pre-trained on StarLightCurves on each UCR target dataset.
for ds in \
    BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ PickupGestureWiimoteZ \
    Wine FaceFour Meat Car Lightning2 Herring Lightning7 Earthquakes Haptics Computers \
    DistalPhalanxTW DistalPhalanxOutlineAgeGroup MiddlePhalanxTW MiddlePhalanxOutlineAgeGroup \
    SyntheticControl ProximalPhalanxTW ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 \
    InlineSkate EOGVerticalSignal EOGHorizontalSignal SmallKitchenAppliances MoteStrain \
    HandOutlines CinCECGTorso Phoneme InsectWingbeatSound FacesUCR FaceAll Mallat \
    MixedShapesSmallTrain PhalangesOutlinesCorrect FreezerSmallTrain MixedShapesRegularTrain \
    FreezerRegularTrain Yoga MelbournePedestrian; do
    python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$ds" --mode finetune --epoch 1000 --loss cross_entropy --source_dataset StarLightCurves --transfer_strategy classification --cuda cuda:1 --save_csv_name StarLightCurves_finetune_cls_0409_
done
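# A hedged sketch (not part of the original script): the per-source blocks in
# this section repeat the same flags and differ only in --source_dataset and
# the matching --save_csv_name prefix, so the sweeps could equivalently be
# driven by one nested loop. The per-source target lists are assumed to match
# the blocks in this section.
#
#   for src in Crop ElectricDevices StarLightCurves Wafer; do
#       for ds in BirdChicken BeetleFly Coffee ...; do  # per-source target list
#           python train.py --backbone fcn --task classification --classifier linear \
#               --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 \
#               --normalize_way single --dataset "$ds" --mode finetune --epoch 1000 \
#               --loss cross_entropy --source_dataset "$src" \
#               --transfer_strategy classification --cuda cuda:1 \
#               --save_csv_name "${src}_finetune_cls_0409_"
#       done
#   done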
# Fine-tune the FCN pre-trained on Wafer on the UCR target datasets below.
for ds in \
    BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ PickupGestureWiimoteZ \
    Wine FaceFour Meat Car Lightning2 Herring Lightning7 Earthquakes Haptics Computers \
    DistalPhalanxTW DistalPhalanxOutlineAgeGroup MiddlePhalanxTW MiddlePhalanxOutlineAgeGroup \
    SyntheticControl ProximalPhalanxTW ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 \
    InlineSkate EOGVerticalSignal EOGHorizontalSignal SmallKitchenAppliances MoteStrain; do
    python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$ds" --mode finetune --epoch 1000 --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_
done
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset HandOutlines --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset CinCECGTorso --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Phoneme --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InsectWingbeatSound --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FacesUCR --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FaceAll --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Mallat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset Wafer --transfer_strategy classification --cuda cuda:1 --save_csv_name Wafer_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ECG5000 --transfer_strategy classification --cuda cuda:1 --save_csv_name ECG5000_finetune_cls_0409_;
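# NOTE: every block in this script repeats one fine-tuning command per target
# dataset, varying only --dataset, --source_dataset and --save_csv_name. The
# sketch below shows the same pattern as a loop; the helper name finetune_cls
# and the TARGETS variable are illustrative (not part of the repo), and the
# flags mirror the explicit commands above.
finetune_cls() {  # $1 = source dataset, $2 = target dataset
    python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$2" --mode finetune --epoch 1000 --loss cross_entropy --source_dataset "$1" --transfer_strategy classification --cuda cuda:1 --save_csv_name "${1}_finetune_cls_0409_"
}
TARGETS="BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ PickupGestureWiimoteZ Wine FaceFour Meat Car Lightning2 Herring Lightning7 Earthquakes Haptics Computers DistalPhalanxTW DistalPhalanxOutlineAgeGroup MiddlePhalanxTW MiddlePhalanxOutlineAgeGroup SyntheticControl ProximalPhalanxTW ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 InlineSkate EOGVerticalSignal EOGHorizontalSignal SmallKitchenAppliances MoteStrain HandOutlines CinCECGTorso Phoneme InsectWingbeatSound FacesUCR FaceAll Mallat MixedShapesSmallTrain PhalangesOutlinesCorrect FreezerSmallTrain MixedShapesRegularTrain FreezerRegularTrain Yoga MelbournePedestrian"
# The ECG5000 block above is then equivalent to the loop below (left
# commented out so the explicit commands are not executed a second time):
# for dst in $TARGETS; do finetune_cls ECG5000 "$dst"; done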
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --source_dataset TwoPatterns --transfer_strategy classification --cuda cuda:1 --save_csv_name TwoPatterns_finetune_cls_0409_;
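# The outer dimension varies the same way: an illustrative, commented-out
# nested loop over the source datasets that appear in this script (Wafer,
# ECG5000, TwoPatterns, FordA) reproduces each block in turn using the
# finetune_cls helper sketched above:
# for src in Wafer ECG5000 TwoPatterns FordA; do
#     for dst in $TARGETS; do finetune_cls "$src" "$dst"; done
# done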
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset FordA --transfer_strategy classification --cuda cuda:1 --save_csv_name FordA_finetune_cls_0409_;

# Same fine-tuning sweep, transferring from UWaveGestureLibraryAll.
for target in \
    BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ \
    PickupGestureWiimoteZ Wine FaceFour Meat Car Lightning2 Herring Lightning7 \
    Earthquakes Haptics Computers DistalPhalanxTW DistalPhalanxOutlineAgeGroup \
    MiddlePhalanxTW MiddlePhalanxOutlineAgeGroup SyntheticControl ProximalPhalanxTW \
    ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 InlineSkate EOGVerticalSignal \
    EOGHorizontalSignal SmallKitchenAppliances MoteStrain HandOutlines CinCECGTorso \
    Phoneme InsectWingbeatSound FacesUCR FaceAll Mallat MixedShapesSmallTrain \
    PhalangesOutlinesCorrect FreezerSmallTrain MixedShapesRegularTrain \
    FreezerRegularTrain Yoga MelbournePedestrian; do
    python train.py --backbone fcn --task classification --classifier linear \
        --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single \
        --dataset "$target" --mode finetune --epoch 1000 --loss cross_entropy \
        --source_dataset UWaveGestureLibraryAll --transfer_strategy classification \
        --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_cls_0409_
done

# Same fine-tuning sweep, transferring from UWaveGestureLibraryX.
for target in \
    BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ \
    PickupGestureWiimoteZ Wine FaceFour Meat Car Lightning2 Herring Lightning7 \
    Earthquakes Haptics Computers DistalPhalanxTW DistalPhalanxOutlineAgeGroup \
    MiddlePhalanxTW MiddlePhalanxOutlineAgeGroup SyntheticControl ProximalPhalanxTW \
    ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 InlineSkate EOGVerticalSignal \
    EOGHorizontalSignal SmallKitchenAppliances MoteStrain HandOutlines CinCECGTorso \
    Phoneme InsectWingbeatSound FacesUCR FaceAll Mallat MixedShapesSmallTrain \
    PhalangesOutlinesCorrect FreezerSmallTrain MixedShapesRegularTrain \
    FreezerRegularTrain Yoga MelbournePedestrian; do
    python train.py --backbone fcn --task classification --classifier linear \
        --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single \
        --dataset "$target" --mode finetune --epoch 1000 --loss cross_entropy \
        --source_dataset UWaveGestureLibraryX --transfer_strategy classification \
        --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_cls_0409_
done
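
# The blocks in this script differ only in --source_dataset and the CSV-name prefix.
# As an illustrative sketch (not part of the original run script), the whole transfer
# sweep could be driven by one helper; run_transfer_sweep and its argument lists are
# hypothetical names, and the command simply mirrors the invocations above.
run_transfer_sweep() {
    local sources="FordA UWaveGestureLibraryAll UWaveGestureLibraryX UWaveGestureLibraryY"
    local targets="$1"  # space-separated UCR target dataset names
    for source in $sources; do
        for target in $targets; do
            python train.py --backbone fcn --task classification --classifier linear \
                --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 \
                --normalize_way single --dataset "$target" --mode finetune --epoch 1000 \
                --loss cross_entropy --source_dataset "$source" \
                --transfer_strategy classification --cuda cuda:1 \
                --save_csv_name "${source}_finetune_cls_0409_"
        done
    done
}
# Example usage (commented out; the enumerated loops above and below already run the sweep):
# run_transfer_sweep "BirdChicken BeetleFly Coffee"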

# Same fine-tuning sweep, transferring from UWaveGestureLibraryY.
for target in \
    BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ \
    PickupGestureWiimoteZ Wine FaceFour Meat Car Lightning2 Herring Lightning7 \
    Earthquakes Haptics Computers DistalPhalanxTW DistalPhalanxOutlineAgeGroup \
    MiddlePhalanxTW MiddlePhalanxOutlineAgeGroup SyntheticControl ProximalPhalanxTW \
    ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 InlineSkate EOGVerticalSignal \
    EOGHorizontalSignal SmallKitchenAppliances MoteStrain HandOutlines CinCECGTorso \
    Phoneme InsectWingbeatSound FacesUCR FaceAll; do
    python train.py --backbone fcn --task classification --classifier linear \
        --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single \
        --dataset "$target" --mode finetune --epoch 1000 --loss cross_entropy \
        --source_dataset UWaveGestureLibraryY --transfer_strategy classification \
        --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_cls_0409_
done
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Mallat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset UWaveGestureLibraryY --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset UWaveGestureLibraryY --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset UWaveGestureLibraryY --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset UWaveGestureLibraryY --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset UWaveGestureLibraryY --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset UWaveGestureLibraryY --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset UWaveGestureLibraryY --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset UWaveGestureLibraryY --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_cls_0409_;
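# Each block below repeats one command template, varying only --dataset (the
# fine-tuning target) while --source_dataset (the pre-trained backbone) is
# shared. A minimal bash sketch of a generator for one such block; the helper
# name run_finetune_block is an assumption for illustration, not the
# repository's own generator script. Defining it does not change what this
# script runs unless the function is called.
run_finetune_block() {
    local source=$1; shift
    # Emit one fine-tuning run per target dataset, mirroring the template above.
    for dataset in "$@"; do
        python train.py --backbone fcn --task classification --classifier linear \
            --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single \
            --dataset "$dataset" --mode finetune --epoch 1000 --loss cross_entropy \
            --source_dataset "$source" --transfer_strategy classification --cuda cuda:1 \
            --save_csv_name "${source}_finetune_cls_0409_"
    done
}
# Example call, equivalent to the first three commands of the next block:
# run_finetune_block UWaveGestureLibraryZ BirdChicken BeetleFly Coffee

# Fine-tune on UCR target datasets from an FCN backbone pre-trained on UWaveGestureLibraryZ.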
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --source_dataset UWaveGestureLibraryZ --transfer_strategy classification --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_cls_0409_;
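# Fine-tune on UCR target datasets from an FCN backbone pre-trained on FordB.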
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --source_dataset FordB --transfer_strategy classification --cuda cuda:1 --save_csv_name FordB_finetune_cls_0409_;
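# Fine-tune on UCR target datasets from an FCN backbone pre-trained on ChlorineConcentration.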
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --source_dataset ChlorineConcentration --transfer_strategy classification --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_cls_0409_;
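# The runs in this script share every flag except the target dataset, the
# source dataset, the transfer strategy, and the output CSV prefix. A minimal
# sketch of factoring out the shared arguments (run_finetune is illustrative,
# not part of the repository's own tooling; extra per-run flags such as
# --decoder_backbone would be forwarded via "$@"):
run_finetune () {
    local target=$1 source=$2 strategy=$3 csv=$4
    shift 4
    python train.py --backbone fcn --task classification --classifier linear \
        --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 \
        --normalize_way single --dataset "$target" --mode finetune --epoch 1000 \
        --loss cross_entropy --source_dataset "$source" \
        --transfer_strategy "$strategy" --cuda cuda:1 \
        --save_csv_name "$csv" "$@"
}
# Equivalent to a single run above:
# run_finetune Yoga ChlorineConcentration classification ChlorineConcentration_finetune_cls_0409_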
# Fine-tune on each target dataset with an encoder pretrained on
# NonInvasiveFetalECGThorax1 (classification transfer strategy).
for target in \
    BirdChicken BeetleFly Coffee OliveOil Beef Rock \
    ShakeGestureWiimoteZ PickupGestureWiimoteZ Wine FaceFour Meat Car \
    Lightning2 Herring Lightning7 Earthquakes Haptics Computers \
    DistalPhalanxTW DistalPhalanxOutlineAgeGroup MiddlePhalanxTW \
    MiddlePhalanxOutlineAgeGroup SyntheticControl ProximalPhalanxTW \
    ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 InlineSkate \
    EOGVerticalSignal EOGHorizontalSignal SmallKitchenAppliances MoteStrain \
    HandOutlines CinCECGTorso Phoneme InsectWingbeatSound FacesUCR FaceAll \
    Mallat MixedShapesSmallTrain PhalangesOutlinesCorrect FreezerSmallTrain \
    MixedShapesRegularTrain FreezerRegularTrain Yoga MelbournePedestrian
do
    python train.py --backbone fcn --task classification --classifier linear \
        --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 \
        --normalize_way single --dataset "$target" --mode finetune --epoch 1000 \
        --loss cross_entropy --source_dataset NonInvasiveFetalECGThorax1 \
        --transfer_strategy classification --cuda cuda:1 \
        --save_csv_name NonInvasiveFetalECGThorax1_finetune_cls_0409_
done
# Fine-tune on each target dataset with an encoder pretrained on
# NonInvasiveFetalECGThorax2 (classification transfer strategy).
for target in \
    BirdChicken BeetleFly Coffee OliveOil Beef Rock \
    ShakeGestureWiimoteZ PickupGestureWiimoteZ Wine FaceFour Meat Car \
    Lightning2 Herring Lightning7 Earthquakes Haptics Computers \
    DistalPhalanxTW DistalPhalanxOutlineAgeGroup MiddlePhalanxTW \
    MiddlePhalanxOutlineAgeGroup SyntheticControl ProximalPhalanxTW \
    ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 InlineSkate \
    EOGVerticalSignal EOGHorizontalSignal SmallKitchenAppliances MoteStrain \
    HandOutlines CinCECGTorso Phoneme InsectWingbeatSound FacesUCR FaceAll \
    Mallat MixedShapesSmallTrain PhalangesOutlinesCorrect FreezerSmallTrain \
    MixedShapesRegularTrain FreezerRegularTrain Yoga MelbournePedestrian
do
    python train.py --backbone fcn --task classification --classifier linear \
        --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 \
        --normalize_way single --dataset "$target" --mode finetune --epoch 1000 \
        --loss cross_entropy --source_dataset NonInvasiveFetalECGThorax2 \
        --transfer_strategy classification --cuda cuda:1 \
        --save_csv_name NonInvasiveFetalECGThorax2_finetune_cls_0409_
done
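# Every run in this script pins --cuda cuda:1. A minimal sketch of spreading a
# sweep over two GPUs, assuming both cuda:0 and cuda:1 are visible to PyTorch
# and have enough memory; kept commented out so this script's behavior is
# unchanged:
#
#   targets=(BirdChicken BeetleFly)   # illustrative subset
#   for i in "${!targets[@]}"; do
#       python train.py --backbone fcn --task classification --classifier linear \
#           --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 \
#           --normalize_way single --dataset "${targets[$i]}" --mode finetune \
#           --epoch 1000 --loss cross_entropy \
#           --source_dataset NonInvasiveFetalECGThorax2 \
#           --transfer_strategy classification --cuda "cuda:$(( i % 2 ))" \
#           --save_csv_name NonInvasiveFetalECGThorax2_finetune_cls_0409_ &
#   done
#   wait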
# Fine-tune on each target dataset with an FCN encoder pretrained on Crop via
# the reconstruction transfer strategy (FCN decoder).
for target in \
    BirdChicken BeetleFly Coffee OliveOil Beef Rock \
    ShakeGestureWiimoteZ PickupGestureWiimoteZ Wine FaceFour Meat Car \
    Lightning2 Herring Lightning7 Earthquakes Haptics Computers \
    DistalPhalanxTW DistalPhalanxOutlineAgeGroup MiddlePhalanxTW \
    MiddlePhalanxOutlineAgeGroup SyntheticControl ProximalPhalanxTW \
    ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 InlineSkate \
    EOGVerticalSignal EOGHorizontalSignal SmallKitchenAppliances MoteStrain \
    HandOutlines CinCECGTorso Phoneme InsectWingbeatSound
do
    python train.py --backbone fcn --task classification --classifier linear \
        --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 \
        --normalize_way single --dataset "$target" --mode finetune --epoch 1000 \
        --loss cross_entropy --decoder_backbone fcn --source_dataset Crop \
        --transfer_strategy reconstruction --cuda cuda:1 \
        --save_csv_name Crop_finetune_rec_fcn_0409_
done
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FacesUCR --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FaceAll --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Mallat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_fcn_0409_;
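# The runs above differ only in --dataset and --source_dataset; every other flag
# is fixed. As a maintenance aid, a minimal equivalent nested loop is sketched
# below, kept commented out so this script's behavior is unchanged. SOURCES and
# TARGETS are hypothetical variables, not part of the original script; extend
# TARGETS with the full target-dataset list used above.
#
# SOURCES="Crop ElectricDevices StarLightCurves Wafer"
# TARGETS="BirdChicken BeetleFly Coffee"
# for src in $SOURCES; do
#   for tgt in $TARGETS; do
#     python train.py --backbone fcn --task classification --classifier linear \
#       --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 \
#       --normalize_way single --dataset "$tgt" --mode finetune --epoch 1000 \
#       --loss cross_entropy --decoder_backbone fcn --source_dataset "$src" \
#       --transfer_strategy reconstruction --cuda cuda:1 \
#       --save_csv_name "${src}_finetune_rec_fcn_0409_"
#   done
# done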
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MoteStrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset HandOutlines --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset CinCECGTorso --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Phoneme --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InsectWingbeatSound --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FacesUCR --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FaceAll --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Mallat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_fcn_0409_;
# Fine-tune the FCN encoder pretrained on ECG5000 (reconstruction transfer) on each UCR target dataset.
for dataset in BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ \
    PickupGestureWiimoteZ Wine FaceFour Meat Car Lightning2 Herring Lightning7 \
    Earthquakes Haptics Computers DistalPhalanxTW DistalPhalanxOutlineAgeGroup \
    MiddlePhalanxTW MiddlePhalanxOutlineAgeGroup SyntheticControl ProximalPhalanxTW \
    ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 InlineSkate EOGVerticalSignal \
    EOGHorizontalSignal SmallKitchenAppliances MoteStrain HandOutlines CinCECGTorso \
    Phoneme InsectWingbeatSound FacesUCR FaceAll Mallat MixedShapesSmallTrain \
    PhalangesOutlinesCorrect FreezerSmallTrain MixedShapesRegularTrain FreezerRegularTrain \
    Yoga MelbournePedestrian; do
    python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 \
        --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$dataset" \
        --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn \
        --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 \
        --save_csv_name ECG5000_finetune_rec_fcn_0409_
done
# Fine-tune the FCN encoder pretrained on TwoPatterns (reconstruction transfer) on each UCR target dataset.
for dataset in BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ \
    PickupGestureWiimoteZ Wine FaceFour Meat Car Lightning2 Herring Lightning7 \
    Earthquakes Haptics Computers DistalPhalanxTW DistalPhalanxOutlineAgeGroup \
    MiddlePhalanxTW MiddlePhalanxOutlineAgeGroup SyntheticControl ProximalPhalanxTW \
    ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 InlineSkate EOGVerticalSignal \
    EOGHorizontalSignal SmallKitchenAppliances MoteStrain HandOutlines CinCECGTorso \
    Phoneme InsectWingbeatSound FacesUCR FaceAll Mallat MixedShapesSmallTrain \
    PhalangesOutlinesCorrect FreezerSmallTrain MixedShapesRegularTrain FreezerRegularTrain \
    Yoga MelbournePedestrian; do
    python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 \
        --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$dataset" \
        --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn \
        --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 \
        --save_csv_name TwoPatterns_finetune_rec_fcn_0409_
done
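# The sweeps in this file differ only in --source_dataset and the matching
# --save_csv_name prefix; everything else is fixed. A minimal consolidation
# sketch, not part of the original script: run_rec_finetune is a hypothetical
# helper name, and the flag values mirror the commands above.
run_rec_finetune() {
    # $1 = source (pretraining) dataset, $2 = target dataset to fine-tune on
    python train.py --backbone fcn --task classification --classifier linear \
        --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single \
        --dataset "$2" --mode finetune --epoch 1000 --loss cross_entropy \
        --decoder_backbone fcn --source_dataset "$1" --transfer_strategy reconstruction \
        --cuda cuda:1 --save_csv_name "$1"_finetune_rec_fcn_0409_
}
# Example usage (TARGETS is a hypothetical space-separated list of UCR dataset names):
# for d in $TARGETS; do run_rec_finetune ECG5000 "$d"; done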
# Fine-tune the FCN encoder pretrained on FordA (reconstruction transfer) on each UCR target dataset.
for dataset in BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ \
    PickupGestureWiimoteZ Wine FaceFour Meat Car Lightning2 Herring Lightning7 \
    Earthquakes Haptics Computers DistalPhalanxTW DistalPhalanxOutlineAgeGroup \
    MiddlePhalanxTW MiddlePhalanxOutlineAgeGroup SyntheticControl ProximalPhalanxTW \
    ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 InlineSkate EOGVerticalSignal; do
    python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 \
        --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$dataset" \
        --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn \
        --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 \
        --save_csv_name FordA_finetune_rec_fcn_0409_
done
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MoteStrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset HandOutlines --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset CinCECGTorso --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Phoneme --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InsectWingbeatSound --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FacesUCR --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FaceAll --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Mallat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_fcn_0409_;
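# A minimal, hypothetical helper sketch (not part of the original runs): every block in
# this script varies only --source_dataset, --dataset, and the --save_csv_name prefix,
# so a whole sweep can be expressed as one parameterized call. The name run_transfer
# and its argument convention are assumptions for illustration only.
run_transfer () {
  local src="$1"; shift  # first argument: source (pre-training) dataset
  local ds
  for ds in "$@"; do     # remaining arguments: target datasets to fine-tune on
    python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 \
      --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$ds" --mode finetune \
      --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset "$src" \
      --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name "${src}_finetune_rec_fcn_0409_"
  done
}
# Example, equivalent to the first two commands of the next block:
# run_transfer UWaveGestureLibraryAll BirdChicken BeetleFly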
# Fine-tune FCN models pre-trained on UWaveGestureLibraryAll (reconstruction) on each target dataset.
for ds in BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ \
  PickupGestureWiimoteZ Wine FaceFour Meat Car Lightning2 Herring Lightning7 Earthquakes \
  Haptics Computers DistalPhalanxTW DistalPhalanxOutlineAgeGroup MiddlePhalanxTW \
  MiddlePhalanxOutlineAgeGroup SyntheticControl ProximalPhalanxTW ProximalPhalanxOutlineAgeGroup \
  SonyAIBORobotSurface1 InlineSkate EOGVerticalSignal EOGHorizontalSignal SmallKitchenAppliances \
  MoteStrain HandOutlines CinCECGTorso Phoneme InsectWingbeatSound FacesUCR FaceAll Mallat \
  MixedShapesSmallTrain PhalangesOutlinesCorrect FreezerSmallTrain MixedShapesRegularTrain \
  FreezerRegularTrain Yoga MelbournePedestrian; do
  python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 \
    --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$ds" --mode finetune \
    --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryAll \
    --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_fcn_0409_
done
# Fine-tune FCN models pre-trained on UWaveGestureLibraryX (reconstruction) on each target dataset.
for ds in BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ \
  PickupGestureWiimoteZ Wine FaceFour Meat Car Lightning2 Herring Lightning7 Earthquakes \
  Haptics Computers DistalPhalanxTW DistalPhalanxOutlineAgeGroup MiddlePhalanxTW \
  MiddlePhalanxOutlineAgeGroup SyntheticControl ProximalPhalanxTW ProximalPhalanxOutlineAgeGroup \
  SonyAIBORobotSurface1 InlineSkate EOGVerticalSignal EOGHorizontalSignal SmallKitchenAppliances \
  MoteStrain HandOutlines CinCECGTorso Phoneme InsectWingbeatSound FacesUCR FaceAll Mallat \
  MixedShapesSmallTrain PhalangesOutlinesCorrect FreezerSmallTrain MixedShapesRegularTrain \
  FreezerRegularTrain Yoga MelbournePedestrian; do
  python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 \
    --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$ds" --mode finetune \
    --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryX \
    --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_fcn_0409_
done
# Fine-tune FCN models pre-trained on UWaveGestureLibraryY (reconstruction) on each target dataset.
for ds in BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ \
  PickupGestureWiimoteZ Wine FaceFour Meat Car Lightning2 Herring Lightning7 Earthquakes \
  Haptics Computers DistalPhalanxTW; do
  python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 \
    --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$ds" --mode finetune \
    --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY \
    --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_
done
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SyntheticControl --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InlineSkate --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGVerticalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MoteStrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset HandOutlines --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset CinCECGTorso --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Phoneme --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InsectWingbeatSound --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FacesUCR --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FaceAll --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Mallat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_fcn_0409_;
# Fine-tune the FCN encoder pre-trained on UWaveGestureLibraryZ (reconstruction) on each target dataset.
for dataset in \
    BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ PickupGestureWiimoteZ \
    Wine FaceFour Meat Car Lightning2 Herring Lightning7 Earthquakes Haptics Computers \
    DistalPhalanxTW DistalPhalanxOutlineAgeGroup MiddlePhalanxTW MiddlePhalanxOutlineAgeGroup \
    SyntheticControl ProximalPhalanxTW ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 \
    InlineSkate EOGVerticalSignal EOGHorizontalSignal SmallKitchenAppliances MoteStrain \
    HandOutlines CinCECGTorso Phoneme InsectWingbeatSound FacesUCR FaceAll Mallat \
    MixedShapesSmallTrain PhalangesOutlinesCorrect FreezerSmallTrain MixedShapesRegularTrain \
    FreezerRegularTrain Yoga MelbournePedestrian
do
    python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 \
        --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$dataset" \
        --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn \
        --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction \
        --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_fcn_0409_
done
# Fine-tune the FCN encoder pre-trained on FordB (reconstruction) on each target dataset.
for dataset in \
    BirdChicken BeetleFly Coffee OliveOil Beef Rock ShakeGestureWiimoteZ PickupGestureWiimoteZ \
    Wine FaceFour Meat Car Lightning2 Herring Lightning7 Earthquakes Haptics Computers \
    DistalPhalanxTW DistalPhalanxOutlineAgeGroup MiddlePhalanxTW MiddlePhalanxOutlineAgeGroup \
    SyntheticControl ProximalPhalanxTW ProximalPhalanxOutlineAgeGroup SonyAIBORobotSurface1 \
    InlineSkate EOGVerticalSignal EOGHorizontalSignal SmallKitchenAppliances MoteStrain \
    HandOutlines CinCECGTorso Phoneme InsectWingbeatSound FacesUCR FaceAll Mallat \
    MixedShapesSmallTrain PhalangesOutlinesCorrect FreezerSmallTrain MixedShapesRegularTrain \
    FreezerRegularTrain Yoga MelbournePedestrian
do
    python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 \
        --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset "$dataset" \
        --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn \
        --source_dataset FordB --transfer_strategy reconstruction \
        --cuda cuda:1 --save_csv_name FordB_finetune_rec_fcn_0409_
done
# Fine-tune the FCN encoder pre-trained on ChlorineConcentration (reconstruction) on each target dataset.
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Herring --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Lightning7 --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Earthquakes --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Haptics --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Computers --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset DistalPhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SyntheticControl --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InlineSkate --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGVerticalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MoteStrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset HandOutlines --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset CinCECGTorso --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Phoneme --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InsectWingbeatSound --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FacesUCR --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FaceAll --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Mallat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset BirdChicken --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset BeetleFly --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Coffee --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset OliveOil --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Beef --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Rock --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Wine --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FaceFour --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Meat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Car --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Lightning2 --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Herring --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Lightning7 --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Earthquakes --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Haptics --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Computers --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset DistalPhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SyntheticControl --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InlineSkate --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGVerticalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MoteStrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset HandOutlines --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset CinCECGTorso --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Phoneme --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InsectWingbeatSound --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FacesUCR --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FaceAll --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Mallat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset BirdChicken --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset BeetleFly --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Coffee --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset OliveOil --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Beef --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Rock --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Wine --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FaceFour --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Meat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Car --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Lightning2 --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Herring --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Lightning7 --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Earthquakes --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Haptics --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Computers --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset DistalPhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SyntheticControl --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InlineSkate --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGVerticalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone fcn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_fcn_0409_;
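# Each block in this script repeats the same train.py invocation over one list of
# target datasets, varying only --dataset. As a minimal sketch of that pattern,
# the helper below (run_finetune_block is hypothetical, not part of this repo)
# could emit one such block; its flag values mirror the explicit commands in this
# file. It is defined only and never called, so running the script is unchanged.
run_finetune_block () {
  local source_ds="$1" decoder="$2" csv_prefix="$3"; shift 3
  for ds in "$@"; do  # remaining arguments are the target datasets
    python train.py --backbone fcn --task classification --classifier linear \
      --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single \
      --dataset "$ds" --mode finetune --epoch 1000 --loss cross_entropy \
      --decoder_backbone "$decoder" --source_dataset "$source_ds" \
      --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name "$csv_prefix"
  done
}
# Usage sketch: run_finetune_block Crop rnn Crop_finetune_rec_rnn_0409_ BirdChicken BeetleFly Coffee
# --- Source: Crop, decoder: rnn (reconstruction transfer) ---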
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Crop --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Crop_finetune_rec_rnn_0409_;
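# --- Source: ElectricDevices, decoder: rnn (reconstruction transfer) ---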
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ElectricDevices --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ElectricDevices_finetune_rec_rnn_0409_;
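# --- Source: StarLightCurves, decoder: rnn (reconstruction transfer) ---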
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset StarLightCurves --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name StarLightCurves_finetune_rec_rnn_0409_;
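# Source: Wafer. Fine-tune the reconstruction-pretrained encoder (RNN decoder) on each UCR target dataset below.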
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset Wafer --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name Wafer_finetune_rec_rnn_0409_;
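# Source: ECG5000. Same target sweep as above, with ECG5000 as the pre-training source.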
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_;
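# NOTE (editorial sketch): each source block in this script repeats one command per target
# dataset, with only --dataset varying. Assuming the flags stay exactly as above, one block
# could equivalently be written as a loop. Kept commented out so the script's behavior is
# unchanged; the target list here is abbreviated and would need to be completed:
#
#   for target in BirdChicken BeetleFly Coffee OliveOil Beef Rock; do
#     python train.py --backbone fcn --task classification --classifier linear \
#       --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single \
#       --dataset "$target" --mode finetune --epoch 1000 --loss cross_entropy \
#       --decoder_backbone rnn --source_dataset ECG5000 --transfer_strategy reconstruction \
#       --cuda cuda:1 --save_csv_name ECG5000_finetune_rec_rnn_0409_
#   done
#
# Source: TwoPatterns. Same target sweep as above, with TwoPatterns as the pre-training source.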
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
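# Note: within each source-dataset block the commands differ only in --dataset,
# so a block can equivalently be written as a bash loop. A minimal sketch for
# the TwoPatterns block above (kept commented out so the explicit command list
# remains the one that actually runs; it assumes a bash shell and covers only
# the target datasets listed immediately above):
#
# for ds in SyntheticControl ProximalPhalanxTW ProximalPhalanxOutlineAgeGroup \
#     SonyAIBORobotSurface1 InlineSkate EOGVerticalSignal EOGHorizontalSignal \
#     SmallKitchenAppliances MoteStrain HandOutlines CinCECGTorso Phoneme \
#     InsectWingbeatSound FacesUCR FaceAll Mallat MixedShapesSmallTrain \
#     PhalangesOutlinesCorrect FreezerSmallTrain MixedShapesRegularTrain \
#     FreezerRegularTrain Yoga MelbournePedestrian; do
#     python train.py --backbone fcn --task classification --classifier linear \
#         --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 \
#         --normalize_way single --dataset "$ds" --mode finetune --epoch 1000 \
#         --loss cross_entropy --decoder_backbone rnn --source_dataset TwoPatterns \
#         --transfer_strategy reconstruction --cuda cuda:1 \
#         --save_csv_name TwoPatterns_finetune_rec_rnn_0409_;
# done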
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordA --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordA_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryAll --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryAll_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Phoneme --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InsectWingbeatSound --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FacesUCR --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FaceAll --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Mallat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryX --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryX_finetune_rec_rnn_0409_;
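# NOTE: each block in this script is fully unrolled, one `python train.py`
# invocation per target dataset, grouped by --source_dataset. An equivalent
# loop (a minimal sketch, not part of the generated script, so it is kept
# commented out; the target list is abbreviated to the first few entries of
# the UWaveGestureLibraryY block below) would be:
#
#   for ds in BirdChicken BeetleFly Coffee OliveOil Beef Rock; do
#       python train.py --backbone fcn --task classification --classifier linear \
#           --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 \
#           --normalize_way single --dataset "$ds" --mode finetune --epoch 1000 \
#           --loss cross_entropy --decoder_backbone rnn \
#           --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction \
#           --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
#   done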
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryY --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryY_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset UWaveGestureLibraryZ --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name UWaveGestureLibraryZ_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InlineSkate --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGVerticalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MoteStrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset HandOutlines --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset CinCECGTorso --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Phoneme --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InsectWingbeatSound --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FacesUCR --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FaceAll --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Mallat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset FordB --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name FordB_finetune_rec_rnn_0409_;
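# Note: each block in this script sweeps the same target datasets while holding
# --source_dataset fixed; only --dataset and the --save_csv_name prefix change
# between blocks. As a minimal sketch, one block could equivalently be written as
# the bash loop below (kept commented out so the runs above are not repeated;
# SRC, TARGETS, and DS are illustrative shell variables, not options of train.py):
#
# SRC=ChlorineConcentration
# TARGETS="BirdChicken BeetleFly Coffee OliveOil Beef Rock ..."  # remaining UCR targets
# for DS in $TARGETS; do
#     python train.py --backbone fcn --task classification --classifier linear \
#         --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single \
#         --dataset "$DS" --mode finetune --epoch 1000 --loss cross_entropy \
#         --decoder_backbone rnn --source_dataset "$SRC" --transfer_strategy reconstruction \
#         --cuda cuda:1 --save_csv_name "${SRC}_finetune_rec_rnn_0409_"
# done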
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset ChlorineConcentration --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name ChlorineConcentration_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax1 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax1_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --mode finetune --epoch 1000 --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Haptics --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Computers --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset DistalPhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset DistalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MiddlePhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MiddlePhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SyntheticControl --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset ProximalPhalanxTW --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset ProximalPhalanxOutlineAgeGroup --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SonyAIBORobotSurface1 --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InlineSkate --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGVerticalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset EOGHorizontalSignal --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset SmallKitchenAppliances --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MoteStrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset HandOutlines --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset CinCECGTorso --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Phoneme --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset InsectWingbeatSound --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FacesUCR --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FaceAll --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Mallat --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset PhalangesOutlinesCorrect --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerSmallTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MixedShapesRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset FreezerRegularTrain --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset Yoga --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
python train.py --backbone fcn --task classification --classifier linear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataroot /SSD/lz/UCRArchive_2018 --dataset MelbournePedestrian --mode finetune --epoch 1000 --classifier linear --loss cross_entropy --decoder_backbone rnn --source_dataset NonInvasiveFetalECGThorax2 --transfer_strategy reconstruction --cuda cuda:1 --save_csv_name NonInvasiveFetalECGThorax2_finetune_rec_rnn_0409_;
================================================
FILE: ts_classification_methods/selftime_cls/__init__.py
================================================
================================================
FILE: ts_classification_methods/selftime_cls/config/CricketX_config.json
================================================
{
"piece_size": 0.2,
"class_type": "3C"
}
================================================
FILE: ts_classification_methods/selftime_cls/config/DodgerLoopDay_config.json
================================================
{
"piece_size": 0.35,
"class_type": "5C"
}
================================================
FILE: ts_classification_methods/selftime_cls/config/InsectWingbeatSound_config.json
================================================
{
"piece_size": 0.4,
"class_type": "6C"
}
================================================
FILE: ts_classification_methods/selftime_cls/config/MFPT_config.json
================================================
{
"piece_size": 0.2,
"class_type": "4C"
}
================================================
FILE: ts_classification_methods/selftime_cls/config/UWaveGestureLibraryAll_config.json
================================================
{
"piece_size": 0.2,
"class_type": "4C"
}
================================================
FILE: ts_classification_methods/selftime_cls/config/XJTU_config.json
================================================
{
"piece_size": 0.2,
"class_type": "4C"
}
================================================
FILE: ts_classification_methods/selftime_cls/dataloader/TSC_data_loader.py
================================================
from sklearn import preprocessing
import numpy as np
def set_nan_to_zero(a):
where_are_NaNs = np.isnan(a)
a[where_are_NaNs] = 0
return a
def TSC_data_loader(dataset_path,dataset_name):
print("[INFO] {}".format(dataset_name))
Train_dataset = np.loadtxt(
dataset_path + '/' + dataset_name + '/' + dataset_name + '_TRAIN.tsv')
Test_dataset = np.loadtxt(
dataset_path + '/' + dataset_name + '/' + dataset_name + '_TEST.tsv')
Train_dataset = Train_dataset.astype(np.float32)
Test_dataset = Test_dataset.astype(np.float32)
X_train = Train_dataset[:, 1:]
y_train = Train_dataset[:, 0:1]
X_test = Test_dataset[:, 1:]
y_test = Test_dataset[:, 0:1]
le = preprocessing.LabelEncoder()
le.fit(np.squeeze(y_train, axis=1))
y_train = le.transform(np.squeeze(y_train, axis=1))
y_test = le.transform(np.squeeze(y_test, axis=1))
return set_nan_to_zero(X_train), y_train, set_nan_to_zero(X_test), y_test
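A minimal usage sketch for this loader (the archive root below is a placeholder, and 'GunPoint' stands in for any UCR 2018 dataset name):
# Hypothetical example: load one UCR dataset and inspect the shapes.
X_train, y_train, X_test, y_test = TSC_data_loader('/path/to/UCRArchive_2018', 'GunPoint')
print(X_train.shape, y_train.shape)   # (n_train, series_length), (n_train,)
print(np.unique(y_train))             # labels re-encoded to 0..n_classes-1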
================================================
FILE: ts_classification_methods/selftime_cls/dataloader/__init__.py
================================================
# -*- coding: utf-8 -*-
================================================
FILE: ts_classification_methods/selftime_cls/dataloader/ucr2018.py
================================================
# -*- coding: utf-8 -*-
import numpy as np
import torch.utils.data as data
import utils.datasets as ds
import torch
from dataloader.TSC_data_loader import TSC_data_loader
class UCR2018(data.Dataset):
def __init__(self, data, targets, transform):
self.data = np.asarray(data, dtype=np.float32)
self.targets = np.asarray(targets, dtype=np.int64)
self.transform = transform
def __getitem__(self, index):
img, target = self.data[index], self.targets[index]
if self.transform is not None:
img_transformed = self.transform(img.copy())
else:
img_transformed = img
return img_transformed, target
def __len__(self):
return self.data.shape[0]
class MultiUCR2018_Intra(data.Dataset):
def __init__(self, data, targets, K, transform, transform_cut, totensor_transform):
self.data = np.asarray(data, dtype=np.float32)
self.targets = np.asarray(targets, dtype=np.int16)
self.K = K # tot number of augmentations
self.transform = transform
self.transform_cut = transform_cut
self.totensor_transform = totensor_transform
def __getitem__(self, index):
# print("### {}".format(index))
img, target = self.data[index], self.targets[index]
img_list0 = list()
img_list1 = list()
label_list = list()
if self.transform is not None:
for _ in range(self.K):
img_transformed = self.transform(img.copy())
img_cut0, img_cut1, label = self.transform_cut(img_transformed)
img_list0.append(self.totensor_transform(img_cut0))
img_list1.append(self.totensor_transform(img_cut1))
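                # NOTE: building CUDA tensors inside __getitem__ assumes single-process loading (num_workers=0)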
label = torch.from_numpy(np.array(label)).cuda()
label_list.append(label)
return img_list0, img_list1, label_list, target
def __len__(self):
return self.data.shape[0]
class MultiUCR2018_InterIntra(data.Dataset):
def __init__(self, data, targets, K, transform, transform_cut, totensor_transform):
self.data = np.asarray(data, dtype=np.float32)
self.targets = np.asarray(targets, dtype=np.int16)
self.K = K # tot number of augmentations
self.transform = transform
self.transform_cut = transform_cut
self.totensor_transform = totensor_transform
def __getitem__(self, index):
# print("### {}".format(index))
img, target = self.data[index], self.targets[index]
img_list = list()
img_list0 = list()
img_list1 = list()
label_list = list()
if self.transform is not None:
for _ in range(self.K):
img_transformed = self.transform(img.copy())
img_cut0, img_cut1, label = self.transform_cut(img_transformed)
img_list.append(self.totensor_transform(img_transformed))
img_list0.append(self.totensor_transform(img_cut0))
img_list1.append(self.totensor_transform(img_cut1))
label = torch.from_numpy(np.array(label)).cuda()
label_list.append(label)
#label_list = torch.from_numpy(np.array(label_list)).cuda()
return img_list, img_list0, img_list1, label_list, target
def __len__(self):
return self.data.shape[0]
class MultiUCR2018(data.Dataset):
def __init__(self, data, targets, K, transform):
self.data = np.asarray(data, dtype=np.float32)
self.targets = np.asarray(targets, dtype=np.int16)
self.K = K # tot number of augmentations
self.transform = transform
def __getitem__(self, index):
# print("### {}".format(index))
img, target = self.data[index], self.targets[index]
img_list = list()
if self.transform is not None:
for _ in range(self.K):
img_transformed = self.transform(img.copy())
img_list.append(img_transformed)
else:
img_list = img
return img_list, target
def __len__(self):
return self.data.shape[0]
def load_ucr2018(dataset_path, dataset_name):
##################
# load raw data
##################
nb_class = ds.nb_classes(dataset_name)
nb_dims = ds.nb_dims(dataset_name)
if dataset_name in ['MFPT', 'XJTU']:
x = np.load("{}/{}/{}_data.npy".format(dataset_path, dataset_name, dataset_name))
y = np.load("{}/{}/{}_label.npy".format(dataset_path, dataset_name, dataset_name))
(x_train, x_test)=(x[:100], x[100:])
(y_train, y_test)=(y[:100], y[100:])
else:
x_train, y_train, x_test, y_test = TSC_data_loader(dataset_path, dataset_name)
nb_timesteps = int(x_train.shape[1] / nb_dims)
input_shape = (nb_timesteps, nb_dims)
############################################
# Combine all train and test data for resample
############################################
x_all = np.concatenate((x_train, x_test), axis=0)
y_all = np.concatenate((y_train, y_test), axis=0)
ts_idx = list(range(x_all.shape[0]))
np.random.shuffle(ts_idx)
x_all = x_all[ts_idx]
y_all = y_all[ts_idx]
label_idxs = np.unique(y_all)
class_stat_all = {}
for idx in label_idxs:
class_stat_all[idx] = len(np.where(y_all == idx)[0])
print("[Stat] All class: {}".format(class_stat_all))
test_idx = []
val_idx = []
train_idx = []
for idx in label_idxs:
target = list(np.where(y_all == idx)[0])
nb_samp = int(len(target))
test_idx += target[:int(nb_samp * 0.25)]
val_idx += target[int(nb_samp * 0.25):int(nb_samp * 0.5)]
train_idx += target[int(nb_samp * 0.5):]
x_test = x_all[test_idx]
y_test = y_all[test_idx]
x_val = x_all[val_idx]
y_val = y_all[val_idx]
x_train = x_all[train_idx]
y_train = y_all[train_idx]
label_idxs = np.unique(y_train)
class_stat = {}
for idx in label_idxs:
class_stat[idx] = len(np.where(y_train == idx)[0])
# print("[Stat] Train class: {}".format(class_stat))
print("[Stat] Train class: mean={}, std={}".format(np.mean(list(class_stat.values())),
np.std(list(class_stat.values()))))
label_idxs = np.unique(y_val)
class_stat = {}
for idx in label_idxs:
class_stat[idx] = len(np.where(y_val == idx)[0])
# print("[Stat] Test class: {}".format(class_stat))
print("[Stat] Val class: mean={}, std={}".format(np.mean(list(class_stat.values())),
np.std(list(class_stat.values()))))
label_idxs = np.unique(y_test)
class_stat = {}
for idx in label_idxs:
class_stat[idx] = len(np.where(y_test == idx)[0])
# print("[Stat] Test class: {}".format(class_stat))
print("[Stat] Test class: mean={}, std={}".format(np.mean(list(class_stat.values())),
np.std(list(class_stat.values()))))
########################################
# Data Split End
########################################
# Process data
x_test = x_test.reshape((-1, input_shape[0], input_shape[1]))
x_val = x_val.reshape((-1, input_shape[0], input_shape[1]))
x_train = x_train.reshape((-1, input_shape[0], input_shape[1]))
print("Train:{}, Test:{}, Class:{}".format(x_train.shape, x_test.shape, nb_class))
# Normalize
x_train_max = np.max(x_train)
x_train_min = np.min(x_train)
x_train = 2. * (x_train - x_train_min) / (x_train_max - x_train_min) - 1.
    # Normalize val/test with train-set min/max only, so no test statistics leak
x_val = 2. * (x_val - x_train_min) / (x_train_max - x_train_min) - 1.
x_test = 2. * (x_test - x_train_min) / (x_train_max - x_train_min) - 1.
return x_train, y_train, x_val, y_val, x_test, y_test, nb_class, class_stat_all
if __name__ == '__main__':
    x_train, y_train, x_val, y_val, x_test, y_test, nb_class, class_stat_all = load_ucr2018('/dev_data/zzj/hzy/datasets/UCR', 'Crop')
    print(y_train[0].shape)
    # load_data comes from dataprepare.py (resolved from the selftime_cls root, like the imports above)
    from dataprepare import load_data
    sum_dataset, sum_target, num_classes = load_data('/dev_data/zzj/hzy/datasets/UCR', 'Crop')
    print(sum_dataset.shape, num_classes)
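A hedged sketch of how this loader is typically consumed downstream (the dataset path is a placeholder; the transform is built from the project's utils.transforms, as in evaluation/eval_ssl.py):
# Hypothetical wiring: 50/25/25 resample, then a PyTorch DataLoader.
import utils.transforms as transforms
x_tr, y_tr, x_va, y_va, x_te, y_te, nb_class, _ = load_ucr2018('/path/to/UCR', 'Coffee')
train_set = UCR2018(data=x_tr, targets=y_tr,
                    transform=transforms.Compose([transforms.ToTensor()]))
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)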
================================================
FILE: ts_classification_methods/selftime_cls/dataprepare.py
================================================
import pandas as pd
from sklearn.model_selection import StratifiedShuffleSplit, StratifiedKFold
from sklearn import preprocessing
import numpy as np
import os
def load_data(dataroot, dataset):
train = pd.read_csv(os.path.join(dataroot, dataset, dataset+'_TRAIN.tsv'), sep='\t', header=None)
train_x = train.iloc[:, 1:]
train_target = train.iloc[:, 0]
test = pd.read_csv(os.path.join(dataroot, dataset, dataset+'_TEST.tsv'), sep='\t', header=None)
test_x = test.iloc[:, 1:]
test_target = test.iloc[:, 0]
sum_dataset = pd.concat([train_x, test_x]).to_numpy(np.float32)
#sum_dataset = sum_dataset.fillna(sum_dataset.mean()).to_numpy(dtype=np.float32)
sum_target = pd.concat([train_target, test_target]).to_numpy(np.float32)
# sum_target = sum_target.fillna(sum_target.mean()).to_numpy(dtype=np.float32)
num_classes = len(np.unique(sum_target))
sum_target = transfer_labels(sum_target)
sum_dataset = np.expand_dims(sum_dataset, 2)
return sum_dataset, sum_target, num_classes
def transfer_labels(labels):
    # re-encode the original (possibly non-contiguous) labels to 0..num_classes-1
    indices = np.unique(labels)
    num_samples = labels.shape[0]
    for i in range(num_samples):
        new_label = np.argwhere(labels[i] == indices)[0][0]
        labels[i] = new_label
    return labels
def k_fold(data, target):
skf = StratifiedKFold(5, shuffle=True, random_state=42)
#skf = StratifiedShuffleSplit(5)
train_sets = []
train_targets = []
val_sets = []
val_targets = []
test_sets = []
test_targets = []
for raw_index, test_index in skf.split(data, target):
raw_set = data[raw_index]
raw_target = target[raw_index]
test_sets.append(data[test_index])
test_targets.append(target[test_index])
train_index, val_index = next(StratifiedKFold(4, shuffle=True, random_state=42).split(raw_set, raw_target))
# train_index, val_index = next(StratifiedShuffleSplit(1).split(raw_set, raw_target))
train_sets.append(raw_set[train_index])
train_targets.append(raw_target[train_index])
val_sets.append(raw_set[val_index])
val_targets.append(raw_target[val_index])
return np.array(train_sets), np.array(train_targets), np.array(val_sets), np.array(val_targets), np.array(test_sets), np.array(test_targets)
def normalize_per_series(data):
    std_ = data.std(axis=1, keepdims=True)
    std_[std_ == 0] = 1.0  # guard against constant series (zero std)
    return (data - data.mean(axis=1, keepdims=True)) / std_
def fill_nan_value(train_set, val_set, test_set):
ind = np.where(np.isnan(train_set))
col_mean = np.nanmean(train_set, axis=0)
col_mean[np.isnan(col_mean)] = 1e-6
train_set[ind] = np.take(col_mean, ind[1])
ind_val = np.where(np.isnan(val_set))
val_set[ind_val] = np.take(col_mean, ind_val[1])
ind_test = np.where(np.isnan(test_set))
test_set[ind_test] = np.take(col_mean, ind_test[1])
return train_set, val_set, test_set
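A sketch of the intended preprocessing order under the usual assumptions (placeholder dataset root; fold 0 of the five folds returned by k_fold; NaNs filled from train statistics before per-series normalization):
# Hypothetical end-to-end preprocessing for a single fold.
data, target, num_classes = load_data('/path/to/UCRArchive_2018', 'Coffee')
train_xs, train_ys, val_xs, val_ys, test_xs, test_ys = k_fold(data, target)
x_train, x_val, x_test = fill_nan_value(train_xs[0], val_xs[0], test_xs[0])
x_train, x_val, x_test = map(normalize_per_series, (x_train, x_val, x_test))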
================================================
FILE: ts_classification_methods/selftime_cls/evaluation/__init__.py
================================================
# -*- coding: utf-8 -*-
================================================
FILE: ts_classification_methods/selftime_cls/evaluation/eval_ssl.py
================================================
# -*- coding: utf-8 -*-
import torch
import utils.transforms as transforms
from dataloader.ucr2018 import UCR2018
import torch.utils.data as data
from optim.pytorchtools import EarlyStopping
from model.model_backbone import SimConv4
def evaluation(x_train, y_train, x_val, y_val, x_test, y_test, nb_class, ckpt, opt, in_channel=1, ckpt_tosave=None, my_state=None):
# no augmentations used for linear evaluation
transform_lineval = transforms.Compose([transforms.ToTensor()])
train_set_lineval = UCR2018(data=x_train, targets=y_train, transform=transform_lineval)
val_set_lineval = UCR2018(data=x_val, targets=y_val, transform=transform_lineval)
test_set_lineval = UCR2018(data=x_test, targets=y_test, transform=transform_lineval)
train_loader_lineval = torch.utils.data.DataLoader(train_set_lineval, batch_size=128, shuffle=True)
val_loader_lineval = torch.utils.data.DataLoader(val_set_lineval, batch_size=128, shuffle=False)
test_loader_lineval = torch.utils.data.DataLoader(test_set_lineval, batch_size=128, shuffle=False)
signal_length = x_train.shape[1]
# loading the saved backbone
backbone_lineval = SimConv4(in_channel).cuda() # defining a raw backbone model
# backbone_lineval = OS_CNN(signal_length).cuda() # defining a raw backbone model
    # the backbone emits opt.feature_size features; the probe maps them to nb_class logits
    linear_layer = torch.nn.Linear(opt.feature_size, nb_class).cuda()
# linear_layer = torch.nn.Linear(backbone_lineval.rep_dim, nb_class).cuda()
    if ckpt is not None:
checkpoint = torch.load(ckpt, map_location='cpu')
else:
checkpoint = my_state
backbone_lineval.load_state_dict(checkpoint)
if ckpt_tosave:
torch.save(backbone_lineval.state_dict(), ckpt_tosave)
    optimizer = torch.optim.Adam(linear_layer.parameters(), lr=0.5)  # hard-coded probe lr; opt.learning_rate_test is unused here
    CE = torch.nn.CrossEntropyLoss()
    early_stopping = EarlyStopping(100, verbose=True)  # hard-coded patience; opt.patience is unused here
best_acc = 0
best_epoch = 0
print('Linear evaluation')
    for epoch in range(400):  # hard-coded epoch budget; opt.epoch_test is bypassed
linear_layer.train()
backbone_lineval.eval()
acc_trains = list()
for i, (data, target) in enumerate(train_loader_lineval):
optimizer.zero_grad()
data = data.cuda()
target = target.cuda()
output = backbone_lineval(data).detach()
output = linear_layer(output)
loss = CE(output, target)
loss.backward()
optimizer.step()
# estimate the accuracy
prediction = output.argmax(-1)
correct = prediction.eq(target.view_as(prediction)).sum()
accuracy = (100.0 * correct / len(target))
acc_trains.append(accuracy.item())
print('[Train-{}][{}] loss: {:.5f}; \t Acc: {:.2f}%' \
.format(epoch + 1, opt.model_name, loss.item(), sum(acc_trains) / len(acc_trains)))
acc_vals = list()
acc_tests = list()
linear_layer.eval()
with torch.no_grad():
for i, (data, target) in enumerate(val_loader_lineval):
data = data.cuda()
target = target.cuda()
output = backbone_lineval(data).detach()
output = linear_layer(output)
# estimate the accuracy
prediction = output.argmax(-1)
correct = prediction.eq(target.view_as(prediction)).sum()
accuracy = (100.0 * correct / len(target))
acc_vals.append(accuracy.item())
val_acc = sum(acc_vals) / len(acc_vals)
if val_acc > best_acc:
best_acc = val_acc
best_epoch = epoch
for i, (data, target) in enumerate(test_loader_lineval):
data = data.cuda()
target = target.cuda()
output = backbone_lineval(data).detach()
output = linear_layer(output)
# estimate the accuracy
prediction = output.argmax(-1)
correct = prediction.eq(target.view_as(prediction)).sum()
accuracy = (100.0 * correct / len(target))
acc_tests.append(accuracy.item())
test_acc = sum(acc_tests) / len(acc_tests)
print('[Test-{}] Val ACC:{:.2f}%, Best Test ACC.: {:.2f}% in Epoch {}'.format(
epoch, val_acc, test_acc, best_epoch))
early_stopping(val_acc, None)
if early_stopping.early_stop:
print("Early stopping")
break
return test_acc, best_epoch
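For reference, a sketch of calling this routine (the opt namespace is a stand-in for the project's argparse options; only the fields read above are filled in, and the checkpoint path is hypothetical):
# Hypothetical linear probe of a saved backbone.
from types import SimpleNamespace
opt = SimpleNamespace(feature_size=64, model_name='SelfTime')
test_acc, best_epoch = evaluation(x_train, y_train, x_val, y_val, x_test, y_test,
                                  nb_class, ckpt='ckpt/backbone_best.tar', opt=opt)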
================================================
FILE: ts_classification_methods/selftime_cls/model/__init__.py
================================================
# -*- coding: utf-8 -*-
================================================
FILE: ts_classification_methods/selftime_cls/model/model_RelationalReasoning.py
================================================
# -*- coding: utf-8 -*-
import torch
from optim.pytorchtools import EarlyStopping
import torch.nn as nn
class RelationalReasoning(torch.nn.Module):
def __init__(self, backbone, feature_size=64):
super(RelationalReasoning, self).__init__()
self.backbone = backbone
self.relation_head = torch.nn.Sequential(
torch.nn.Linear(feature_size*2, 256),
torch.nn.BatchNorm1d(256),
torch.nn.LeakyReLU(),
torch.nn.Linear(256, 1))
def aggregate(self, features, K):
relation_pairs_list = list()
targets_list = list()
size = int(features.shape[0] / K)
shifts_counter=1
for index_1 in range(0, size*K, size):
for index_2 in range(index_1+size, size*K, size):
# Using the 'cat' aggregation function by default
pos1 = features[index_1:index_1 + size]
pos2 = features[index_2:index_2+size]
pos_pair = torch.cat([pos1,
pos2], 1) # (batch_size, fz*2)
# Shuffle without collisions by rolling the mini-batch (negatives)
neg1 = torch.roll(features[index_2:index_2 + size],
shifts=shifts_counter, dims=0)
neg_pair1 = torch.cat([pos1, neg1], 1) # (batch_size, fz*2)
relation_pairs_list.append(pos_pair)
relation_pairs_list.append(neg_pair1)
targets_list.append(torch.ones(size, dtype=torch.float32).cuda())
targets_list.append(torch.zeros(size, dtype=torch.float32).cuda())
shifts_counter+=1
if(shifts_counter>=size):
shifts_counter=1 # avoid identity pairs
relation_pairs = torch.cat(relation_pairs_list, 0).cuda() # K(K-1) * (batch_size, fz*2)
targets = torch.cat(targets_list, 0).cuda()
return relation_pairs, targets
def train(self, tot_epochs, train_loader, opt):
patience = opt.patience
early_stopping = EarlyStopping(patience, verbose=True,
checkpoint_pth='{}/backbone_best.tar'.format(opt.ckpt_dir))
optimizer = torch.optim.Adam([
{'params': self.backbone.parameters()},
{'params': self.relation_head.parameters()}], lr=opt.learning_rate)
BCE = torch.nn.BCEWithLogitsLoss()
self.backbone.train()
self.relation_head.train()
epoch_max = 0
acc_max=0
for epoch in range(tot_epochs):
acc_epoch=0
loss_epoch=0
# the real target is discarded (unsupervised)
for i, (data_augmented, _) in enumerate(train_loader):
K = len(data_augmented) # tot augmentations
x = torch.cat(data_augmented, 0).cuda()
optimizer.zero_grad()
# forward pass (backbone)
features = self.backbone(x)
# aggregation function
relation_pairs, targets = self.aggregate(features, K)
# forward pass (relation head)
score = self.relation_head(relation_pairs).squeeze()
# cross-entropy loss and backward
loss = BCE(score, targets)
loss.backward()
optimizer.step()
# estimate the accuracy
predicted = torch.round(torch.sigmoid(score))
correct = predicted.eq(targets.view_as(predicted)).sum()
accuracy = (100.0 * correct / float(len(targets)))
acc_epoch += accuracy.item()
loss_epoch += loss.item()
acc_epoch /= len(train_loader)
loss_epoch /= len(train_loader)
if acc_epoch>acc_max:
acc_max = acc_epoch
epoch_max = epoch
early_stopping(acc_epoch, self.backbone)
if early_stopping.early_stop:
print("Early stopping")
break
if (epoch+1)%opt.save_freq==0:
print("[INFO] save backbone at epoch {}!".format(epoch))
torch.save(self.backbone.state_dict(), '{}/backbone_{}.tar'.format(opt.ckpt_dir, epoch))
print('Epoch [{}][{}][{}] loss= {:.5f}; Epoch ACC.= {:.2f}%, Max ACC.= {:.1f}%, Max Epoch={}' \
.format(epoch + 1, opt.model_name, opt.dataset_name,
loss_epoch, acc_epoch, acc_max, epoch_max))
return acc_max, epoch_max
class RelationalReasoning_Intra(torch.nn.Module):
def __init__(self, backbone, feature_size=64, nb_class=3):
super(RelationalReasoning_Intra, self).__init__()
self.backbone = backbone
self.cls_head = torch.nn.Sequential(
torch.nn.Linear(feature_size*2, 256),
torch.nn.BatchNorm1d(256),
torch.nn.LeakyReLU(),
torch.nn.Linear(256, nb_class),
            torch.nn.Softmax(dim=1),  # note: CrossEntropyLoss below already applies log-softmax; kept from the reference code
)
def run_test(self, predict, labels):
correct = 0
pred = predict.data.max(1)[1]
correct = pred.eq(labels.data).cpu().sum()
return correct, len(labels.data)
def train(self, tot_epochs, train_loader, opt):
patience = opt.patience
early_stopping = EarlyStopping(patience, verbose=True,
checkpoint_pth='{}/backbone_best.tar'.format(opt.ckpt_dir))
optimizer = torch.optim.Adam([
{'params': self.backbone.parameters()},
{'params': self.cls_head.parameters()},
], lr=opt.learning_rate)
c_criterion = nn.CrossEntropyLoss()
self.backbone.train()
self.cls_head.train()
epoch_max = 0
acc_max=0
for epoch in range(tot_epochs):
acc_epoch=0
acc_epoch_cls=0
loss_epoch=0
# the real target is discarded (unsupervised)
for i, (data_augmented0, data_augmented1, data_label, _) in enumerate(train_loader):
K = len(data_augmented0) # tot augmentations
x_cut0 = torch.cat(data_augmented0, 0).cuda()
x_cut1 = torch.cat(data_augmented1, 0).cuda()
c_label = torch.cat(data_label, 0).cuda()
optimizer.zero_grad()
# forward pass (backbone)
features_cut0 = self.backbone(x_cut0)
features_cut1 = self.backbone(x_cut1)
features_cls = torch.cat([features_cut0, features_cut1], 1)
# score_intra = self.relation_head(relation_pairs_intra).squeeze()
c_output = self.cls_head(features_cls)
correct_cls, length_cls = self.run_test(c_output, c_label)
loss_c = c_criterion(c_output, c_label)
loss=loss_c
loss.backward()
optimizer.step()
# estimate the accuracy
loss_epoch += loss.item()
accuracy_cls = 100. * correct_cls / length_cls
acc_epoch_cls += accuracy_cls.item()
acc_epoch_cls /= len(train_loader)
loss_epoch /= len(train_loader)
if acc_epoch_cls>acc_max:
acc_max = acc_epoch_cls
epoch_max = epoch
early_stopping(acc_epoch_cls, self.backbone)
if early_stopping.early_stop:
print("Early stopping")
break
if (epoch+1)%opt.save_freq==0:
print("[INFO] save backbone at epoch {}!".format(epoch))
torch.save(self.backbone.state_dict(), '{}/backbone_{}.tar'.format(opt.ckpt_dir, epoch))
print('Epoch [{}][{}][{}] loss= {:.5f}; Epoch ACC.= {:.2f}%, CLS.= {:.2f}%, '
'Max ACC.= {:.1f}%, Max Epoch={}' \
.format(epoch + 1, opt.model_name, opt.dataset_name,
loss_epoch, acc_epoch,acc_epoch_cls, acc_max, epoch_max))
return acc_max, epoch_max
class RelationalReasoning_InterIntra(torch.nn.Module):
def __init__(self, backbone, feature_size=64, nb_class=3):
super(RelationalReasoning_InterIntra, self).__init__()
self.backbone = backbone
self.relation_head = torch.nn.Sequential(
torch.nn.Linear(feature_size*2, 256),
torch.nn.BatchNorm1d(256),
torch.nn.LeakyReLU(),
torch.nn.Linear(256, 1))
self.cls_head = torch.nn.Sequential(
torch.nn.Linear(feature_size*2, 256),
torch.nn.BatchNorm1d(256),
torch.nn.LeakyReLU(),
torch.nn.Linear(256, nb_class),
            torch.nn.Softmax(dim=1),  # note: CrossEntropyLoss below already applies log-softmax; kept from the reference code
)
# self.softmax = nn.Softmax()
def aggregate(self, features, K):
relation_pairs_list = list()
targets_list = list()
size = int(features.shape[0] / K)
shifts_counter=1
for index_1 in range(0, size*K, size):
for index_2 in range(index_1+size, size*K, size):
# Using the 'cat' aggregation function by default
pos1 = features[index_1:index_1 + size]
pos2 = features[index_2:index_2+size]
pos_pair = torch.cat([pos1,
pos2], 1) # (batch_size, fz*2)
# Shuffle without collisions by rolling the mini-batch (negatives)
neg1 = torch.roll(features[index_2:index_2 + size],
shifts=shifts_counter, dims=0)
neg_pair1 = torch.cat([pos1, neg1], 1) # (batch_size, fz*2)
relation_pairs_list.append(pos_pair)
relation_pairs_list.append(neg_pair1)
targets_list.append(torch.ones(size, dtype=torch.float32).cuda())
targets_list.append(torch.zeros(size, dtype=torch.float32).cuda())
shifts_counter+=1
if(shifts_counter>=size):
shifts_counter=1 # avoid identity pairs
relation_pairs = torch.cat(relation_pairs_list, 0).cuda() # K(K-1) * (batch_size, fz*2)
targets = torch.cat(targets_list, 0).cuda()
return relation_pairs, targets
def run_test(self, predict, labels):
correct = 0
pred = predict.data.max(1)[1]
correct = pred.eq(labels.data).cpu().sum()
return correct, len(labels.data)
def train(self, tot_epochs, train_loader, opt):
patience = opt.patience
early_stopping = EarlyStopping(patience, verbose=True,
checkpoint_pth='{}/backbone_best.tar'.format(opt.ckpt_dir))
optimizer = torch.optim.Adam([
{'params': self.backbone.parameters()},
{'params': self.relation_head.parameters()},
{'params': self.cls_head.parameters()},
], lr=opt.learning_rate)
BCE = torch.nn.BCEWithLogitsLoss()
c_criterion = nn.CrossEntropyLoss()
self.backbone.train()
self.relation_head.train()
self.cls_head.train()
epoch_max = 0
acc_max=0
for epoch in range(tot_epochs):
acc_epoch=0
acc_epoch_cls=0
loss_epoch=0
# the real target is discarded (unsupervised)
            for i, (data, data_augmented0, data_augmented1, data_label, _) in enumerate(train_loader):
                K = len(data)  # tot augmentations
                x = torch.cat(data, 0).cuda()  # move inputs to GPU, matching the Intra variant above
                x_cut0 = torch.cat(data_augmented0, 0).cuda()
                x_cut1 = torch.cat(data_augmented1, 0).cuda()
                c_label = torch.cat(data_label, 0).cuda()
optimizer.zero_grad()
# forward pass (backbone)
features = self.backbone(x)
features_cut0 = self.backbone(x_cut0)
features_cut1 = self.backbone(x_cut1)
features_cls = torch.cat([features_cut0, features_cut1], 1)
# aggregation function
relation_pairs, targets = self.aggregate(features, K)
# relation_pairs_intra, targets_intra = self.aggregate_intra(features_cut0, features_cut1, K)
# forward pass (relation head)
score = self.relation_head(relation_pairs).squeeze()
c_output = self.cls_head(features_cls)
correct_cls, length_cls = self.run_test(c_output, c_label)
# cross-entropy loss and backward
loss = BCE(score, targets)
loss_c = c_criterion(c_output, c_label)
loss+=loss_c
loss.backward()
optimizer.step()
# estimate the accuracy
predicted = torch.round(torch.sigmoid(score))
correct = predicted.eq(targets.view_as(predicted)).sum()
accuracy = (100.0 * correct / float(len(targets)))
acc_epoch += accuracy.item()
loss_epoch += loss.item()
accuracy_cls = 100. * correct_cls / length_cls
acc_epoch_cls += accuracy_cls.item()
acc_epoch /= len(train_loader)
acc_epoch_cls /= len(train_loader)
loss_epoch /= len(train_loader)
if (acc_epoch+acc_epoch_cls)>acc_max:
acc_max = (acc_epoch+acc_epoch_cls)
epoch_max = epoch
early_stopping((acc_epoch+acc_epoch_cls), self.backbone)
if early_stopping.early_stop:
print("Early stopping")
break
if (epoch+1)%opt.save_freq==0:
print("[INFO] save backbone at epoch {}!".format(epoch))
torch.save(self.backbone.state_dict(), '{}/backbone_{}.tar'.format(opt.ckpt_dir, epoch))
print('Epoch [{}][{}][{}] loss= {:.5f}; Epoch ACC.= {:.2f}%, CLS.= {:.2f}%, '
'Max ACC.= {:.1f}%, Max Epoch={}' \
.format(epoch + 1, opt.model_name, opt.dataset_name,
loss_epoch, acc_epoch,acc_epoch_cls, acc_max, epoch_max))
return acc_max, epoch_max
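To make the relation-pair construction concrete: with K augmented views of a batch of size B, aggregate emits K(K-1)/2 positive blocks plus the same number of rolled negative blocks, each of B concatenated feature pairs. A small shape check, assuming a CUDA device (the method itself calls .cuda()):
# Hypothetical shape check: B=4 samples, K=3 views, 64-d features.
from model.model_backbone import SimConv4
model = RelationalReasoning(SimConv4().cuda(), feature_size=64)
feats = torch.randn(4 * 3, 64).cuda()
pairs, targets = model.aggregate(feats, K=3)
print(pairs.shape, targets.shape)  # torch.Size([24, 128]) torch.Size([24])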
================================================
FILE: ts_classification_methods/selftime_cls/model/model_backbone.py
================================================
# -*- coding: utf-8 -*-
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
class SimConv4(torch.nn.Module):
def __init__(self, in_channel=1, feature_size=64):
super(SimConv4, self).__init__()
self.feature_size = feature_size
self.name = "conv4"
self.in_channel = in_channel
self.layer1 = torch.nn.Sequential(
nn.Conv1d(self.in_channel, 8, 4, 2, 1, bias=False),
torch.nn.BatchNorm1d(8),
torch.nn.ReLU()
)
self.layer2 = torch.nn.Sequential(
nn.Conv1d(8, 16, 4, 2, 1, bias=False),
torch.nn.BatchNorm1d(16),
torch.nn.ReLU(),
)
self.layer3 = torch.nn.Sequential(
nn.Conv1d(16, 32, 4, 2, 1, bias=False),
torch.nn.BatchNorm1d(32),
torch.nn.ReLU(),
)
self.layer4 = torch.nn.Sequential(
nn.Conv1d(32, 64, 3, 2, 1, bias=False),
torch.nn.BatchNorm1d(64),
torch.nn.ReLU(),
torch.nn.AdaptiveAvgPool1d(1)
)
self.flatten = torch.nn.Flatten()
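        # weight initialisation; the Conv2d/BatchNorm2d branches below never fire in this 1-D model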
for m in self.modules():
if isinstance(m, torch.nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, torch.nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
if isinstance(m, nn.Conv1d):
nn.init.xavier_normal_(m.weight.data)
# nn.init.xavier_normal_(m.bias.data)
elif isinstance(m, nn.BatchNorm1d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def forward(self, x):
x_ = x.view(x.shape[0], 1, -1)
h = self.layer1(x_) # (B, 1, D)->(B, 8, D/2)
h = self.layer2(h) # (B, 8, D/2)->(B, 16, D/4)
h = self.layer3(h) # (B, 16, D/4)->(B, 32, D/8)
h = self.layer4(h) # (B, 32, D/8)->(B, 64, 1)
h = self.flatten(h)
h = F.normalize(h, dim=1)
return h
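A quick sanity check of the backbone's output contract (batch size and series length below are arbitrary):
# Hypothetical: 8 univariate series of length 128 -> L2-normalized 64-d embeddings.
net = SimConv4(in_channel=1)
x = torch.randn(8, 128, 1)   # (B, T, C) layout, as produced by the loaders above
h = net(x)                   # reshaped to (B, 1, T) internally
print(h.shape)               # torch.Size([8, 64])
print(h.norm(dim=1))         # each row has unit norm thanks to F.normalize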
================================================
FILE: ts_classification_methods/selftime_cls/optim/__init__.py
================================================
# -*- coding: utf-8 -*-
================================================
FILE: ts_classification_methods/selftime_cls/optim/pretrain.py
================================================
# -*- coding: utf-8 -*-
import torch
import utils.transforms as transforms
from dataloader.ucr2018 import *
import torch.utils.data as data
from model.model_RelationalReasoning import *
from model.model_backbone import SimConv4
def pretrain_IntraSampleRel(x_train, y_train, opt):
K = opt.K
batch_size = opt.batch_size # 128 has been used in the paper
tot_epochs = opt.epochs # 400 has been used in the paper
feature_size = opt.feature_size
ckpt_dir = opt.ckpt_dir
prob = 0.2 # Transform Probability
raw = transforms.Raw()
cutout = transforms.Cutout(sigma=0.1, p=prob)
jitter = transforms.Jitter(sigma=0.2, p=prob)
scaling = transforms.Scaling(sigma=0.4, p=prob)
magnitude_warp = transforms.MagnitudeWrap(sigma=0.3, knot=4, p=prob)
time_warp = transforms.TimeWarp(sigma=0.2, knot=8, p=prob)
window_slice = transforms.WindowSlice(reduce_ratio=0.8, p=prob)
window_warp = transforms.WindowWarp(window_ratio=0.3, scales=(0.5, 2), p=prob)
transforms_list = {'jitter': [jitter],
'cutout': [cutout],
'scaling': [scaling],
'magnitude_warp': [magnitude_warp],
'time_warp': [time_warp],
'window_slice': [window_slice],
'window_warp': [window_warp],
'G0': [jitter, magnitude_warp, window_slice],
'G1': [jitter, time_warp, window_slice],
'G2': [jitter, time_warp, window_slice, window_warp, cutout],
'none': [raw]}
transforms_targets = list()
for name in opt.aug_type:
for item in transforms_list[name]:
transforms_targets.append(item)
train_transform = transforms.Compose(transforms_targets)
if '2C' in opt.class_type:
cut_piece = transforms.CutPiece2C(sigma=opt.piece_size)
nb_class=2
elif '3C' in opt.class_type:
cut_piece = transforms.CutPiece3C(sigma=opt.piece_size)
nb_class=3
elif '4C' in opt.class_type:
cut_piece = transforms.CutPiece4C(sigma=opt.piece_size)
nb_class=4
elif '5C' in opt.class_type:
cut_piece = transforms.CutPiece5C(sigma=opt.piece_size)
nb_class = 5
elif '6C' in opt.class_type:
cut_piece = transforms.CutPiece6C(sigma=opt.piece_size)
nb_class = 6
elif '7C' in opt.class_type:
cut_piece = transforms.CutPiece7C(sigma=opt.piece_size)
nb_class = 7
elif '8C' in opt.class_type:
cut_piece = transforms.CutPiece8C(sigma=opt.piece_size)
nb_class = 8
tensor_transform = transforms.ToTensor()
backbone = SimConv4().cuda()
model = RelationalReasoning_Intra(backbone, feature_size, nb_class).cuda()
train_set = MultiUCR2018_Intra(data=x_train, targets=y_train, K=K,
transform=train_transform, transform_cut=cut_piece,
totensor_transform=tensor_transform)
train_loader = torch.utils.data.DataLoader(train_set,
batch_size=batch_size,
shuffle=True)
torch.save(model.backbone.state_dict(), '{}/backbone_init.tar'.format(ckpt_dir))
acc_max, epoch_max = model.train(tot_epochs=tot_epochs, train_loader=train_loader, opt=opt)
torch.save(model.backbone.state_dict(), '{}/backbone_last.tar'.format(ckpt_dir))
return acc_max, epoch_max
def pretrain_InterSampleRel(x_train, y_train, opt):
K = opt.K
batch_size = opt.batch_size # 128 has been used in the paper
tot_epochs = opt.epochs # 400 has been used in the paper
feature_size = opt.feature_size
ckpt_dir = opt.ckpt_dir
prob = 0.2 # Transform Probability
raw = transforms.Raw()
cutout = transforms.Cutout(sigma=0.1, p=prob)
jitter = transforms.Jitter(sigma=0.2, p=prob)
scaling = transforms.Scaling(sigma=0.4, p=prob)
magnitude_warp = transforms.MagnitudeWrap(sigma=0.3, knot=4, p=prob)
time_warp = transforms.TimeWarp(sigma=0.2, knot=8, p=prob)
window_slice = transforms.WindowSlice(reduce_ratio=0.8, p=prob)
window_warp = transforms.WindowWarp(window_ratio=0.3, scales=(0.5, 2), p=prob)
transforms_list = {'jitter': [jitter],
'cutout': [cutout],
'scaling': [scaling],
'magnitude_warp': [magnitude_warp],
'time_warp': [time_warp],
'window_slice': [window_slice],
'window_warp': [window_warp],
'G0': [jitter, magnitude_warp, window_slice],
'G1': [jitter, time_warp, window_slice],
'G2': [jitter, time_warp, window_slice, window_warp, cutout],
'none': [raw]}
transforms_targets = list()
for name in opt.aug_type:
for item in transforms_list[name]:
transforms_targets.append(item)
train_transform = transforms.Compose(transforms_targets + [transforms.ToTensor()])
backbone = SimConv4().cuda()
model = RelationalReasoning(backbone, feature_size).cuda()
train_set = MultiUCR2018(data=x_train, targets=y_train, K=K, transform=train_transform)
train_loader = torch.utils.data.DataLoader(train_set,
batch_size=batch_size,
shuffle=True)
torch.save(model.backbone.state_dict(), '{}/backbone_init.tar'.format(ckpt_dir))
acc_max, epoch_max = model.train(tot_epochs=tot_epochs, train_loader=train_loader, opt=opt)
torch.save(model.backbone.state_dict(), '{}/backbone_last.tar'.format(ckpt_dir))
return acc_max, epoch_max
def pretrain_SelfTime(x_train, y_train, opt, in_channel=1):
K = opt.K
batch_size = opt.batch_size # 128 has been used in the paper
tot_epochs = opt.epochs # 400 has been used in the paper
feature_size = opt.feature_size
ckpt_dir = opt.ckpt_dir
prob = 0.2 # Transform Probability
cutout = transforms.Cutout(sigma=0.1, p=prob)
jitter = transforms.Jitter(sigma=0.2, p=prob)
scaling = transforms.Scaling(sigma=0.4, p=prob)
magnitude_warp = transforms.MagnitudeWrap(sigma=0.3, knot=4, p=prob)
time_warp = transforms.TimeWarp(sigma=0.2, knot=8, p=prob)
window_slice = transforms.WindowSlice(reduce_ratio=0.8, p=prob)
window_warp = transforms.WindowWarp(window_ratio=0.3, scales=(0.5, 2), p=prob)
transforms_list = {'jitter': jitter,
'cutout': cutout,
'scaling': scaling,
'magnitude_warp': magnitude_warp,
'time_warp': time_warp,
'window_slice': window_slice,
'window_warp': window_warp,
'G0': [jitter, magnitude_warp, window_slice],
'G1': [jitter, time_warp, window_slice],
'G2': [jitter, time_warp, window_slice, window_warp, cutout],
'none': []}
    transforms_targets = list()
    for name in opt.aug_type:
        item = transforms_list[name]
        # 'G0'/'G1'/'G2' map to lists of transforms, the others to single transforms; flatten either way
        transforms_targets.extend(item if isinstance(item, list) else [item])
train_transform = transforms.Compose(transforms_targets)
tensor_transform = transforms.ToTensor()
if '2C' in opt.class_type:
cut_piece = transforms.CutPiece2C(sigma=opt.piece_size)
nb_class=2
elif '3C' in opt.class_type:
cut_piece = transforms.CutPiece3C(sigma=opt.piece_size)
nb_class=3
elif '4C' in opt.class_type:
cut_piece = transforms.CutPiece4C(sigma=opt.piece_size)
nb_class=4
elif '5C' in opt.class_type:
cut_piece = transforms.CutPiece5C(sigma=opt.piece_size)
nb_class = 5
elif '6C' in opt.class_type:
cut_piece = transforms.CutPiece6C(sigma=opt.piece_size)
nb_class = 6
elif '7C' in opt.class_type:
cut_piece = transforms.CutPiece7C(sigma=opt.piece_size)
nb_class = 7
elif '8C' in opt.class_type:
cut_piece = transforms.CutPiece8C(sigma=opt.piece_size)
nb_class = 8
backbone = SimConv4(in_channel).cuda()
model = RelationalReasoning_InterIntra(backbone, feature_size, nb_class).cuda()
train_set = MultiUCR2018_InterIntra(data=x_train, targets=y_train, K=K,
transform=train_transform, transform_cut=cut_piece,
totensor_transform=tensor_transform)
train_loader = torch.utils.data.DataLoader(train_set,
batch_size=batch_size,
shuffle=True)
torch.save(model.backbone.state_dict(), '{}/backbone_init.tar'.format(ckpt_dir))
acc_max, epoch_max = model.train(tot_epochs=tot_epochs, train_loader=train_loader, opt=opt)
torch.save(model.backbone.state_dict(), '{}/backbone_last.tar'.format(ckpt_dir))
return acc_max, epoch_max, model.backbone.state_dict()
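A hedged sketch of the option namespace this entry point expects (field names mirror exactly what the functions above and RelationalReasoning_InterIntra.train read; values are illustrative only, and a CUDA device is assumed throughout this package):
# Hypothetical opt for pretrain_SelfTime; every field below is consumed above.
from types import SimpleNamespace
opt = SimpleNamespace(K=4, batch_size=128, epochs=400, feature_size=64,
                      ckpt_dir='./ckpt', aug_type=['G1'], class_type='3C',
                      piece_size=0.2, patience=200, save_freq=100,
                      learning_rate=0.01, model_name='SelfTime', dataset_name='CricketX')
acc_max, epoch_max, state = pretrain_SelfTime(x_train, y_train, opt)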
================================================
FILE: ts_classification_methods/selftime_cls/optim/pytorchtools.py
================================================
# -*- coding: utf-8 -*-
import numpy as np
import torch
class EarlyStopping:
"""Early stops the training if validation loss doesn't improve after a given patience."""
def __init__(self, patience=50, verbose=False, delta=0, checkpoint_pth='chechpoint.pt'):
"""
Args:
patience (int): How long to wait after last time validation loss improved.
Default: 7
verbose (bool): If True, prints a message for each validation loss improvement.
Default: False
delta (float): Minimum change in the monitored quantity to qualify as an improvement.
Default: 0
"""
self.patience = patience
self.verbose = verbose
self.counter = 0
self.best_score = None
self.early_stop = False
        self.val_loss_min = np.inf  # np.inf, since np.Inf was removed in NumPy 2.0
self.delta = delta
self.checkpoint_pth = checkpoint_pth
def __call__(self, val_loss, model):
score = val_loss
if self.best_score is None:
self.best_score = score
# self.save_checkpoint(val_loss, model, self.checkpoint_pth)
elif score <= self.best_score + self.delta:
self.counter += 1
print(f'EarlyStopping counter: {self.counter} out of {self.patience}')
if self.counter >= self.patience:
self.early_stop = True
else:
self.best_score = score
self.save_checkpoint(val_loss, model, self.checkpoint_pth)
self.counter = 0
def save_checkpoint(self, val_loss, model, checkpoint_pth):
        '''Saves the model when the monitored score improves.'''
        if model is not None:
            if self.verbose:
                print(f'Monitored score improved ({self.val_loss_min:.6f} --> {val_loss:.6f}). Saving model ...')
torch.save(model.state_dict(), checkpoint_pth)
self.val_loss_min = val_loss
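Despite the loss-flavoured attribute names, the stopper treats the monitored value as higher-is-better (a score <= best_score + delta counts as no improvement), so it is fed accuracies throughout this package. A minimal usage sketch:
# Hypothetical loop: stop once validation accuracy stalls for 50 epochs.
stopper = EarlyStopping(patience=50, verbose=True, checkpoint_pth='./probe_best.pt')
for epoch in range(400):
    val_acc = evaluate_one_epoch()   # placeholder for the caller's validation step
    stopper(val_acc, model=None)     # pass a module to also checkpoint the best weights
    if stopper.early_stop:
        break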
================================================
FILE: ts_classification_methods/selftime_cls/optim/train.py
================================================
# -*- coding: utf-8 -*-
import torch
import utils.transforms as transforms
from dataloader.ucr2018 import UCR2018
import torch.utils.data as data
from optim.pytorchtools import EarlyStopping
from model.model_backbone import SimConv4
import utils.transforms as transforms_ts
def supervised_train(x_train, y_train, x_val, y_val, x_test, y_test, nb_class, opt):
# construct data loader
# Those are the transformations used in the paper
prob = 0.2 # Transform Probability
cutout = transforms_ts.Cutout(sigma=0.1, p=prob)
    jitter = transforms_ts.Jitter(sigma=0.2, p=prob)
scaling = transforms_ts.Scaling(sigma=0.4, p=prob)
magnitude_warp = transforms_ts.MagnitudeWrap(sigma=0.3, knot=4, p=prob)
time_warp = transforms_ts.TimeWarp(sigma=0.2, knot=8, p=prob)
window_slice = transforms_ts.WindowSlice(reduce_ratio=0.8, p=prob)
window_warp = transforms_ts.WindowWarp(window_ratio=0.3, scales=(0.5, 2), p=prob)
transforms_list = {'jitter': [jitter],
'cutout': [cutout],
'scaling': [scaling],
'magnitude_warp': [magnitude_warp],
'time_warp': [time_warp],
'window_slice': [window_slice],
'window_warp': [window_warp],
'G0': [jitter, magnitude_warp, window_slice],
'G1': [jitter, time_warp, window_slice],
'G2': [jitter, time_warp, window_slice, window_warp, cutout],
'none': []}
transforms_targets = list()
for name in opt.aug_type:
for item in transforms_list[name]:
transforms_targets.append(item)
train_transform = transforms_ts.Compose(transforms_targets + [transforms_ts.ToTensor()])
transform_lineval = transforms.Compose([transforms.ToTensor()])
train_set_lineval = UCR2018(data=x_train, targets=y_train, transform=train_transform)
val_set_lineval = UCR2018(data=x_val, targets=y_val, transform=transform_lineval)
test_set_lineval = UCR2018(data=x_test, targets=y_test, transform=transform_lineval)
train_loader_lineval = torch.utils.data.DataLoader(train_set_lineval, batch_size=128, shuffle=True)
val_loader_lineval = torch.utils.data.DataLoader(val_set_lineval, batch_size=128, shuffle=False)
test_loader_lineval = torch.utils.data.DataLoader(test_set_lineval, batch_size=128, shuffle=False)
    # build a fresh backbone (nothing is loaded here; it is trained jointly with the head)
    backbone_lineval = SimConv4().cuda()
    # linear head mapping opt.feature_size backbone features to nb_class logits
linear_layer = torch.nn.Linear(opt.feature_size, nb_class).cuda()
optimizer = torch.optim.Adam([{'params': backbone_lineval.parameters()},
{'params': linear_layer.parameters()}], lr=opt.learning_rate)
CE = torch.nn.CrossEntropyLoss()
early_stopping = EarlyStopping(opt.patience_test, verbose=True,
checkpoint_pth='{}/backbone_best.tar'.format(opt.ckpt_dir))
torch.save(backbone_lineval.state_dict(), '{}/backbone_init.tar'.format(opt.ckpt_dir))
best_acc = 0
best_epoch = 0
print('Supervised Train')
for epoch in range(opt.epochs_test):
backbone_lineval.train()
linear_layer.train()
acc_trains = list()
for i, (data, target) in enumerate(train_loader_lineval):
optimizer.zero_grad()
data = data.cuda()
target = target.cuda()
output = backbone_lineval(data)
output = linear_layer(output)
loss = CE(output, target)
loss.backward()
optimizer.step()
# estimate the accuracy
prediction = output.argmax(-1)
correct = prediction.eq(target.view_as(prediction)).sum()
accuracy = (100.0 * correct / len(target))
acc_trains.append(accuracy.item())
print('[Train-{}][{}] loss: {:.5f}; \t Acc: {:.2f}%' \
.format(epoch + 1, opt.model_name, loss.item(), sum(acc_trains) / len(acc_trains)))
acc_vals = list()
acc_tests = list()
backbone_lineval.eval()
linear_layer.eval()
with torch.no_grad():
for i, (data, target) in enumerate(val_loader_lineval):
data = data.cuda()
target = target.cuda()
output = backbone_lineval(data).detach()
output = linear_layer(output)
# estimate the accuracy
prediction = output.argmax(-1)
correct = prediction.eq(target.view_as(prediction)).sum()
accuracy = (100.0 * correct / len(target))
acc_vals.append(accuracy.item())
val_acc = sum(acc_vals) / len(acc_vals)
if val_acc >= best_acc:
best_acc = val_acc
best_epoch = epoch
for i, (data, target) in enumerate(test_loader_lineval):
data = data.cuda()
target = target.cuda()
output = backbone_lineval(data).detach()
output = linear_layer(output)
# estimate the accuracy
prediction = output.argmax(-1)
correct = prediction.eq(target.view_as(prediction)).sum()
accuracy = (100.0 * correct / len(target))
acc_tests.append(accuracy.item())
test_acc = sum(acc_tests) / len(acc_tests)
print('[Test-{}] Val ACC:{:.2f}%, Best Test ACC.: {:.2f}% in Epoch {}'.format(
epoch, val_acc, test_acc, best_epoch))
early_stopping(val_acc, backbone_lineval)
if early_stopping.early_stop:
print("Early stopping")
break
torch.save(backbone_lineval.state_dict(), '{}/backbone_last.tar'.format(opt.ckpt_dir))
return test_acc, best_epoch
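# --- Hedged invocation sketch (all values below are placeholders) ---
# supervised_train expects numpy arrays shaped (N, T, 1) plus an `opt` object
# carrying exactly the attributes read above; a SimpleNamespace is enough:
#
#   from types import SimpleNamespace
#   opt = SimpleNamespace(aug_type=['magnitude_warp', 'time_warp'], feature_size=64,
#                         learning_rate=0.01, patience_test=50, epochs_test=400,
#                         ckpt_dir='./ckpt', model_name='SelfTime')
#   test_acc, best_epoch = supervised_train(x_train, y_train, x_val, y_val,
#                                           x_test, y_test, nb_class, opt)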
================================================
FILE: ts_classification_methods/selftime_cls/scripts/ucr.sh
================================================
python -u train_ssl.py --dataset_name Herring --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ProximalPhalanxOutlineAgeGroup --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name UWaveGestureLibraryAll --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name FiftyWords --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name PLAID --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name SmoothSubspace --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Lightning7 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Mallat --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name FordB --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name FaceFour --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Fungi --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name EOGHorizontalSignal --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ChlorineConcentration --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ECGFiveDays --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Computers --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name InsectEPGRegularTrain --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name DodgerLoopDay --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Wafer --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name FaceAll --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name MiddlePhalanxOutlineAgeGroup --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Phoneme --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name SonyAIBORobotSurface2 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name DistalPhalanxOutlineAgeGroup --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name GunPointOldVersusYoung --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name CBF --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Haptics --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name CinCECGTorso --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ECG5000 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name MedicalImages --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ShakeGestureWiimoteZ --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Rock --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name SmallKitchenAppliances --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name BeetleFly --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name WordSynonyms --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ItalyPowerDemand --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name LargeKitchenAppliances --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Yoga --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name HouseTwenty --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name FordA --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Meat --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ToeSegmentation2 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name GestureMidAirD2 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name NonInvasiveFetalECGThorax1 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Adiac --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name UWaveGestureLibraryY --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Crop --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name BirdChicken --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name AllGestureWiimoteY --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ElectricDevices --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ShapeletSim --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name UWaveGestureLibraryX --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name CricketZ --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name OSULeaf --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name DistalPhalanxOutlineCorrect --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name FreezerRegularTrain --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Ham --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name AllGestureWiimoteX --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name MixedShapesRegularTrain --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name MiddlePhalanxOutlineCorrect --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Plane --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name PigArtPressure --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name SyntheticControl --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Fish --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name MelbournePedestrian --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ShapesAll --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name InsectEPGSmallTrain --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Symbols --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name PhalangesOutlinesCorrect --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name AllGestureWiimoteZ --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name CricketY --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name MixedShapesSmallTrain --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name TwoLeadECG --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Chinatown --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ToeSegmentation1 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name HandOutlines --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Worms --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name SemgHandSubjectCh2 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Wine --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ACSF1 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ProximalPhalanxOutlineCorrect --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name DodgerLoopWeekend --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name DodgerLoopGame --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name DistalPhalanxTW --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name GesturePebbleZ2 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name PigAirwayPressure --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Beef --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Strawberry --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name BME --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name MoteStrain --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name MiddlePhalanxTW --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name FacesUCR --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name GunPoint --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name PickupGestureWiimoteZ --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name GestureMidAirD1 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ProximalPhalanxTW --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name SwedishLeaf --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Lightning2 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name OliveOil --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Coffee --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name SemgHandGenderCh2 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Car --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name InsectWingbeatSound --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name GunPointMaleVersusFemale --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name RefrigerationDevices --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ArrowHead --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name InlineSkate --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name DiatomSizeReduction --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name GunPointAgeSpan --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name FreezerSmallTrain --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name WormsTwoClass --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name GesturePebbleZ1 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name EthanolLevel --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ScreenType --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name TwoPatterns --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name PowerCons --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name StarLightCurves --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Earthquakes --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name UWaveGestureLibraryZ --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name SemgHandMovementCh2 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name SonyAIBORobotSurface1 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name ECG200 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name GestureMidAirD3 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name PigCVP --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name EOGVerticalSignal --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name UMD --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name NonInvasiveFetalECGThorax2 --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name CricketX --model_name SelfTime --random_seed 42
python -u train_ssl.py --dataset_name Trace --model_name SelfTime --random_seed 42
================================================
FILE: ts_classification_methods/selftime_cls/train_ssl.py
================================================
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd  # required for the result CSV handling at the bottom of this script
from evaluation.eval_ssl import evaluation
from utils.utils import get_config_from_json
import torch
import argparse
from optim.pretrain import *
import datetime
import random
from data.preprocessing import *
import os
import sys
sys.path.append('..')
def parse_option():
parser = argparse.ArgumentParser('argument for training')
parser.add_argument('--save_freq', type=int, default=200,
help='save frequency')
parser.add_argument('--batch_size', type=int, default=128,
help='batch_size')
    # A larger K (more augmented views per sample) generally helps.
parser.add_argument('--K', type=int, default=16,
help='Number of augmentation for each sample')
parser.add_argument('--feature_size', type=int, default=64,
help='feature_size')
parser.add_argument('--num_workers', type=int, default=16,
help='num of workers to use')
    parser.add_argument('--epochs', type=int, default=400,
help='number of training epochs')
parser.add_argument('--patience', type=int, default=400,
help='training patience')
parser.add_argument('--aug_type', type=str,
default='none', help='Augmentation type')
parser.add_argument('--piece_size', type=float, default=0.2,
help='piece size for time series piece sampling')
parser.add_argument('--class_type', type=str,
default='3C', help='Classification type')
# optimization
parser.add_argument('--learning_rate', type=float, default=0.01,
help='learning rate')
# model dataset
parser.add_argument('--dataset_name', type=str, default='CricketX',
help='dataset')
parser.add_argument('--ucr_path', type=str, default='/dev_data/zzj/hzy/datasets/UCR',
help='Data root for dataset.')
parser.add_argument('--ckpt_dir', type=str, default='./ckpt/',
help='Data path for checkpoint.')
# method
parser.add_argument('--backbone', type=str, default='SimConv4')
parser.add_argument('--model_name', type=str, default='InterSample',
choices=['InterSample', 'IntraTemporal', 'SelfTime'], help='choose method')
parser.add_argument('--config_dir', type=str,
default='./config', help='The Configuration Dir')
parser.add_argument('--gpus', type=str, default='0', help='selected gpu')
parser.add_argument('--random_seed', type=int,
default=42, help='for reproduction purpose')
opt = parser.parse_args()
return opt
if __name__ == "__main__":
opt = parse_option()
exp = 'linear_eval'
os.environ['CUDA_VISIBLE_DEVICES'] = opt.gpus
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# set seed
random.seed(opt.random_seed)
np.random.seed(opt.random_seed)
torch.manual_seed(opt.random_seed)
torch.cuda.manual_seed(opt.random_seed)
torch.cuda.manual_seed_all(opt.random_seed)
aug1 = ['magnitude_warp']
aug2 = ['time_warp']
    # reuse the UWaveGestureLibraryAll config for class_type and piece_size
config_dict = get_config_from_json('{}/{}_config.json'.format(
opt.config_dir, 'UWaveGestureLibraryAll'))
opt.class_type = config_dict['class_type']
opt.piece_size = config_dict['piece_size']
if opt.model_name == 'InterSample':
model_paras = 'none'
else:
model_paras = '{}_{}'.format(opt.piece_size, opt.class_type)
    if aug1 == aug2:
        opt.aug_type = aug1  # aug1 is already a list; wrapping it again would nest it
    elif type(aug1) is list:
        opt.aug_type = aug1 + aug2
    else:
        opt.aug_type = [aug1, aug2]
log_dir = './log/{}/{}/{}/{}/{}'.format(
exp, opt.dataset_name, opt.model_name, '_'.join(opt.aug_type), model_paras)
if not os.path.exists(log_dir):
os.makedirs(log_dir)
file2print_detail_train = open("{}/train_detail.log".format(log_dir), 'a+')
print(datetime.datetime.now(), file=file2print_detail_train)
print("Dataset\tTrain\tTest\tDimension\tClass\tSeed\tAcc_max\tEpoch_max",
file=file2print_detail_train)
file2print_detail_train.flush()
sum_dataset, sum_target, nb_class = load_data(
opt.ucr_path, opt.dataset_name)
sum_dataset = np.expand_dims(sum_dataset, 2)
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = k_fold(
sum_dataset, sum_target)
accu = []
if not os.path.exists(opt.ckpt_dir):
os.makedirs(opt.ckpt_dir)
print('[INFO] Running at:', opt.dataset_name)
save_path = './ucr_result.csv'
for i, x_train in enumerate(train_datasets):
print('{} fold start training!'.format(i))
y_train = train_targets[i]
x_val = val_datasets[i]
y_val = val_targets[i]
x_test = test_datasets[i]
y_test = test_targets[i]
x_train, x_val, x_test = fill_nan_value(x_train, x_val, x_test)
x_train, x_val, x_test = normalize_per_series(
x_train), normalize_per_series(x_val), normalize_per_series(x_test)
if opt.model_name == 'InterSample':
acc_max, epoch_max = pretrain_InterSampleRel(x_train, y_train, opt)
elif 'IntraTemporal' in opt.model_name:
acc_max, epoch_max = pretrain_IntraSampleRel(x_train, y_train, opt)
elif 'SelfTime' in opt.model_name:
acc_max, epoch_max, model_state_dict = pretrain_SelfTime(
x_train, y_train, opt)
acc_test, epoch_max_point = evaluation(x_train, y_train, x_val, y_val, x_test, y_test,
nb_class=nb_class, ckpt=None, opt=opt, ckpt_tosave=None, my_state=model_state_dict)
accu.append(acc_test)
accu = np.array(accu)
acc_mean = np.mean(accu)
acc_std = np.std(accu)
if os.path.exists(save_path):
result_form = pd.read_csv(save_path)
else:
result_form = pd.DataFrame(columns=['target', 'accuracy', 'std'])
    # DataFrame.append was removed in pandas 2.0; pd.concat is the supported equivalent
    result_form = pd.concat([result_form, pd.DataFrame(
        [{'target': opt.dataset_name, 'accuracy': '%.4f' % acc_mean, 'std': '%.4f' % acc_std}])],
        ignore_index=True)
result_form = result_form.iloc[:, -3:]
result_form.to_csv(save_path)
================================================
FILE: ts_classification_methods/selftime_cls/utils/__init__.py
================================================
# -*- coding: utf-8 -*-
================================================
FILE: ts_classification_methods/selftime_cls/utils/augmentation.py
================================================
import numpy as np
from tqdm import tqdm
import utils.helper as hlp
def slidewindow(ts, horizon=.2, stride=0.2):
xf = []
yf = []
for i in range(0, ts.shape[0], int(stride * ts.shape[0])):
horizon1 = int(horizon * ts.shape[0])
if (i + horizon1 + horizon1 <= ts.shape[0]):
xf.append(ts[i:i + horizon1,0])
yf.append(ts[i + horizon1:i + horizon1 + horizon1, 0])
xf = np.asarray(xf)
yf = np.asarray(yf)
return xf, yf
def cutout(ts, perc=.1):
seq_len = ts.shape[0]
new_ts = ts.copy()
win_len = int(perc * seq_len)
start = np.random.randint(0, seq_len-win_len-1)
end = start + win_len
start = max(0, start)
end = min(end, seq_len)
# print("[INFO] start={}, end={}".format(start, end))
new_ts[start:end, ...] = 0
# return new_ts, ts[start:end, ...]
return new_ts
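# Worked example (illustrative): for a (100, 1) series and perc=0.1, cutout()
# zeroes one random 10-step window and leaves the other 90 steps untouched:
#
#   ts = np.random.randn(100, 1)
#   aug = cutout(ts, perc=0.1)
#   assert (aug == 0).all(axis=1).sum() >= 10   # the zeroed window
#   assert aug.shape == ts.shape                # shape is preserved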
def cut_piece2C(ts, perc=.1):
seq_len = ts.shape[0]
win_class = seq_len/(2*2)
if perc<1:
win_len = int(perc * seq_len)
else:
win_len = perc
start1 = np.random.randint(0, seq_len-win_len)
end1 = start1 + win_len
start2 = np.random.randint(0, seq_len - win_len)
end2 = start2 + win_len
if abs(start1-start2)<(win_class):
label=0
else:
label=1
return ts[start1:end1, ...], ts[start2:end2, ...], label
def cut_piece3C(ts, perc=.1):
seq_len = ts.shape[0]
win_class = seq_len/(2*3)
if perc<1:
win_len = int(perc * seq_len)
else:
win_len = perc
start1 = np.random.randint(0, seq_len-win_len)
end1 = start1 + win_len
start2 = np.random.randint(0, seq_len - win_len)
end2 = start2 + win_len
if abs(start1-start2)<(win_class):
label=0
elif abs(start1-start2)<(2*win_class):
label=1
else:
label=2
return ts[start1:end1, ...], ts[start2:end2, ...], label
def cut_piece4C(ts, perc=.1):
seq_len = ts.shape[0]
win_class = seq_len / (2 * 4)
if perc < 1:
win_len = int(perc * seq_len)
else:
win_len = perc
start1 = np.random.randint(0, seq_len - win_len)
end1 = start1 + win_len
start2 = np.random.randint(0, seq_len - win_len)
end2 = start2 + win_len
if abs(start1 - start2) < (win_class):
label = 0
elif abs(start1 - start2) < (2 * win_class):
label = 1
elif abs(start1 - start2) < (3 * win_class):
label = 2
else:
label = 3
return ts[start1:end1, ...], ts[start2:end2, ...], label
def cut_piece5C(ts, perc=.1):
seq_len = ts.shape[0]
win_class = seq_len / (2 * 5)
if perc < 1:
win_len = int(perc * seq_len)
else:
win_len = perc
start1 = np.random.randint(0, seq_len - win_len)
end1 = start1 + win_len
start2 = np.random.randint(0, seq_len - win_len)
end2 = start2 + win_len
if abs(start1 - start2) < (win_class):
label = 0
elif abs(start1 - start2) < (2 * win_class):
label = 1
elif abs(start1 - start2) < (3 * win_class):
label = 2
elif abs(start1 - start2) < (4 * win_class):
label = 3
else:
label = 4
return ts[start1:end1, ...], ts[start2:end2, ...], label
def cut_piece6C(ts, perc=.1):
seq_len = ts.shape[0]
win_class = seq_len / (2 * 6)
if perc < 1:
win_len = int(perc * seq_len)
else:
win_len = perc
start1 = np.random.randint(0, seq_len - win_len)
end1 = start1 + win_len
start2 = np.random.randint(0, seq_len - win_len)
end2 = start2 + win_len
if abs(start1 - start2) < (win_class):
label = 0
elif abs(start1 - start2) < (2 * win_class):
label = 1
elif abs(start1 - start2) < (3 * win_class):
label = 2
elif abs(start1 - start2) < (4 * win_class):
label = 3
elif abs(start1 - start2) < (5 * win_class):
label = 4
else:
label = 5
return ts[start1:end1, ...], ts[start2:end2, ...], label
def cut_piece7C(ts, perc=.1):
seq_len = ts.shape[0]
win_class = seq_len / (2 * 7)
if perc < 1:
win_len = int(perc * seq_len)
else:
win_len = perc
start1 = np.random.randint(0, seq_len - win_len)
end1 = start1 + win_len
start2 = np.random.randint(0, seq_len - win_len)
end2 = start2 + win_len
if abs(start1 - start2) < (win_class):
label = 0
elif abs(start1 - start2) < (2 * win_class):
label = 1
elif abs(start1 - start2) < (3 * win_class):
label = 2
elif abs(start1 - start2) < (4 * win_class):
label = 3
elif abs(start1 - start2) < (5 * win_class):
label = 4
elif abs(start1 - start2) < (6 * win_class):
label = 5
else:
label = 6
return ts[start1:end1, ...], ts[start2:end2, ...], label
def cut_piece8C(ts, perc=.1):
seq_len = ts.shape[0]
win_class = seq_len / (2 * 8)
if perc < 1:
win_len = int(perc * seq_len)
else:
win_len = perc
start1 = np.random.randint(0, seq_len - win_len)
end1 = start1 + win_len
start2 = np.random.randint(0, seq_len - win_len)
end2 = start2 + win_len
if abs(start1 - start2) < (win_class):
label = 0
elif abs(start1 - start2) < (2 * win_class):
label = 1
elif abs(start1 - start2) < (3 * win_class):
label = 2
elif abs(start1 - start2) < (4 * win_class):
label = 3
elif abs(start1 - start2) < (5 * win_class):
label = 4
elif abs(start1 - start2) < (6 * win_class):
label = 5
elif abs(start1 - start2) < (7 * win_class):
label = 6
else:
label = 7
return ts[start1:end1, ...], ts[start2:end2, ...], label
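# --- Generalized restatement of the cut_piece*C family (illustrative) ---
# All variants above share one rule: sample two equal-length windows and label
# the pair by how far apart their start indices are, in units of
# win_class = seq_len / (2 * n_class), capping at the last class. This helper
# reproduces that rule for any n_class; it is a sketch, not repo API.
def cut_piece_nc(ts, n_class, perc=.1):
    seq_len = ts.shape[0]
    win_class = seq_len / (2 * n_class)
    win_len = int(perc * seq_len) if perc < 1 else perc
    start1 = np.random.randint(0, seq_len - win_len)
    start2 = np.random.randint(0, seq_len - win_len)
    # bucket the start-index distance; distances beyond (n_class-1)*win_class
    # all map to the final label, matching the trailing else branches above
    label = min(int(abs(start1 - start2) // win_class), n_class - 1)
    return ts[start1:start1 + win_len, ...], ts[start2:start2 + win_len, ...], label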
def jitter(x, sigma=0.03):
# https://arxiv.org/pdf/1706.00527.pdf
return x + np.random.normal(loc=0., scale=sigma, size=x.shape)
def scaling(x, sigma=0.1):
# https://arxiv.org/pdf/1706.00527.pdf
factor = np.random.normal(loc=1., scale=sigma, size=(x.shape[0],x.shape[2]))
return np.multiply(x, factor[:,np.newaxis,:])
def rotation(x):
flip = np.random.choice([-1, 1], size=(x.shape[0],x.shape[2]))
rotate_axis = np.arange(x.shape[2])
np.random.shuffle(rotate_axis)
return flip[:,np.newaxis,:] * x[:,:,rotate_axis]
def scaling_s(x, sigma=0.1, plot=False):
# https://arxiv.org/pdf/1706.00527.pdf
factor = np.random.normal(loc=1., scale=sigma, size=(1, x.shape[1]))
x_ = np.multiply(x, factor[:, :])
if plot:
hlp.plot1d(x, x_, save_file='aug_examples/scal.png')
return x_
def rotation_s(x, plot=False):
flip = np.random.choice([-1], size=(1, x.shape[1]))
rotate_axis = np.arange(x.shape[1])
np.random.shuffle(rotate_axis)
x_ = flip[:, :] * x[:, rotate_axis]
if plot:
hlp.plot1d(x, x_, save_file='aug_examples/rotation_s.png')
return x_
def rotation2d(x, sigma=0.2):
thetas = np.random.normal(loc=0, scale=sigma, size=(x.shape[0]))
c = np.cos(thetas)
s = np.sin(thetas)
ret = np.zeros_like(x)
for i, pat in enumerate(x):
rot = np.array(((c[i], -s[i]), (s[i], c[i])))
ret[i] = np.dot(pat, rot)
return ret
def permutation(x, max_segments=5, seg_mode="equal"):
orig_steps = np.arange(x.shape[1])
num_segs = np.random.randint(1, max_segments, size=(x.shape[0]))
ret = np.zeros_like(x)
for i, pat in enumerate(x):
if num_segs[i] > 1:
if seg_mode == "random":
split_points = np.random.choice(x.shape[1]-2, num_segs[i]-1, replace=False)
split_points.sort()
splits = np.split(orig_steps, split_points)
else:
splits = np.array_split(orig_steps, num_segs[i])
warp = np.concatenate(np.random.permutation(splits)).ravel()
ret[i] = pat[warp]
else:
ret[i] = pat
return ret
def magnitude_warp(x, sigma=0.2, knot=4):
from scipy.interpolate import CubicSpline
orig_steps = np.arange(x.shape[1])
random_warps = np.random.normal(loc=1.0, scale=sigma, size=(x.shape[0], knot+2, x.shape[2]))
warp_steps = (np.ones((x.shape[2],1))*(np.linspace(0, x.shape[1]-1., num=knot+2))).T
ret = np.zeros_like(x)
for i, pat in enumerate(x):
li = []
for dim in range(x.shape[2]):
li.append(CubicSpline(warp_steps[:, dim], random_warps[i, :, dim])(orig_steps))
warper = np.array(li).T
ret[i] = pat * warper
return ret
def magnitude_warp_s(x, sigma=0.2, knot=4, plot=False):
from scipy.interpolate import CubicSpline
orig_steps = np.arange(x.shape[0])
random_warps = np.random.normal(loc=1.0, scale=sigma, size=(1, knot + 2, x.shape[1]))
warp_steps = (np.ones((x.shape[1], 1)) * (np.linspace(0, x.shape[0] - 1., num=knot + 2))).T
li = []
for dim in range(x.shape[1]):
li.append(CubicSpline(warp_steps[:, dim], random_warps[0, :, dim])(orig_steps))
warper = np.array(li).T
x_ = x * warper
if plot:
hlp.plot1d(x, x_, save_file='aug_examples/magnitude_warp_s.png')
return x_
def time_warp(x, sigma=0.2, knot=4):
from scipy.interpolate import CubicSpline
orig_steps = np.arange(x.shape[1])
random_warps = np.random.normal(loc=1.0, scale=sigma, size=(x.shape[0], knot+2, x.shape[2]))
warp_steps = (np.ones((x.shape[2],1))*(np.linspace(0, x.shape[1]-1., num=knot+2))).T
ret = np.zeros_like(x)
for i, pat in enumerate(x):
for dim in range(x.shape[2]):
time_warp = CubicSpline(warp_steps[:,dim], warp_steps[:,dim] * random_warps[i,:,dim])(orig_steps)
scale = (x.shape[1]-1)/time_warp[-1]
ret[i,:,dim] = np.interp(orig_steps, np.clip(scale*time_warp, 0, x.shape[1]-1), pat[:,dim]).T
return ret
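# Shape conventions (illustrative): jitter/scaling/rotation/permutation/
# magnitude_warp/time_warp operate on batched arrays shaped (N, T, C), while
# the *_s variants take a single (T, C) series. A quick smoke test:
#
#   x = np.random.randn(8, 128, 3)            # 8 series, 128 steps, 3 channels
#   assert jitter(x).shape == x.shape
#   assert magnitude_warp(x).shape == x.shape
#   assert time_warp(x).shape == x.shape
#   assert time_warp_s(x[0]).shape == x[0].shape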
def time_warp_s(x, sigma=0.2, knot=4, plot=False):
from scipy.interpolate import CubicSpline
orig_steps = np.arange(x.shape[0])
random_warps = np.random.normal(loc=1.0, scale=sigma, size=(1, knot + 2, x.shape[1]))
warp_steps = (np.ones((x.shape[1], 1)) * (np.linspace(0, x.shape[0] - 1., num=knot + 2))).T
ret = np.zeros_like(x)
for dim in range(x.shape[1]):
time_warp = CubicSpline(warp_steps[:, dim],
warp_steps[:, dim] * random_warps[0, :, dim])(orig_steps)
scale = (x.shape[0] - 1) / time_warp[-1]
ret[:, dim] = np.interp(orig_steps, np.clip(scale * time_warp, 0, x.shape[0] - 1),
x[:, dim]).T
if plot:
hlp.plot1d(x, ret, save_file='aug_examples/time_warp_s.png')
return ret
def window_slice(x, reduce_ratio=0.9):
# https://halshs.archives-ouvertes.fr/halshs-01357973/document
target_len = np.ceil(reduce_ratio*x.shape[1]).astype(int)
if target_len >= x.shape[1]:
return x
starts = np.random.randint(low=0, high=x.shape[1]-target_len, size=(x.shape[0])).astype(int)
ends = (target_len + starts).astype(int)
ret = np.zeros_like(x)
for i, pat in enumerate(x):
for dim in range(x.shape[2]):
ret[i,:,dim] = np.interp(np.linspace(0, target_len, num=x.shape[1]), np.arange(target_len), pat[starts[i]:ends[i],dim]).T
return ret
def window_slice_s(x, reduce_ratio=0.9):
# https://halshs.archives-ouvertes.fr/halshs-01357973/document
target_len = np.ceil(reduce_ratio * x.shape[0]).astype(int)
if target_len >= x.shape[0]:
return x
starts = np.random.randint(low=0, high=x.shape[0] - target_len, size=(1)).astype(int)
ends = (target_len + starts).astype(int)
ret = np.zeros_like(x)
for dim in range(x.shape[1]):
ret[:, dim] = np.interp(np.linspace(0, target_len, num=x.shape[0]), np.arange(target_len),
x[starts[0]:ends[0], dim]).T
return ret
def window_warp(x, window_ratio=0.1, scales=[0.5, 2.]):
# https://halshs.archives-ouvertes.fr/halshs-01357973/document
warp_scales = np.random.choice(scales, x.shape[0])
warp_size = np.ceil(window_ratio*x.shape[1]).astype(int)
window_steps = np.arange(warp_size)
window_starts = np.random.randint(low=1, high=x.shape[1]-warp_size-1, size=(x.shape[0])).astype(int)
window_ends = (window_starts + warp_size).astype(int)
ret = np.zeros_like(x)
for i, pat in enumerate(x):
for dim in range(x.shape[2]):
start_seg = pat[:window_starts[i],dim]
window_seg = np.interp(np.linspace(0, warp_size-1, num=int(warp_size*warp_scales[i])), window_steps, pat[window_starts[i]:window_ends[i],dim])
end_seg = pat[window_ends[i]:,dim]
warped = np.concatenate((start_seg, window_seg, end_seg))
ret[i,:,dim] = np.interp(np.arange(x.shape[1]), np.linspace(0, x.shape[1]-1., num=warped.size), warped).T
return ret
def window_warp_s(x, window_ratio=0.1, scales=[0.5, 2.]):
# https://halshs.archives-ouvertes.fr/halshs-01357973/document
warp_scales = np.random.choice(scales, 1)
warp_size = np.ceil(window_ratio * x.shape[0]).astype(int)
window_steps = np.arange(warp_size)
window_starts = np.random.randint(low=1, high=x.shape[0] - warp_size - 1, size=(1)).astype(int)
window_ends = (window_starts + warp_size).astype(int)
ret = np.zeros_like(x)
pat=x
for dim in range(x.shape[1]):
start_seg = pat[:window_starts[0], dim]
window_seg = np.interp(np.linspace(0, warp_size - 1,
num=int(warp_size * warp_scales[0])), window_steps,
pat[window_starts[0]:window_ends[0], dim])
end_seg = pat[window_ends[0]:, dim]
warped = np.concatenate((start_seg, window_seg, end_seg))
ret[:, dim] = np.interp(np.arange(x.shape[0]), np.linspace(0, x.shape[0] - 1., num=warped.size),
warped).T
return ret
def spawner(x, labels, sigma=0.05, verbose=0, slope_constraint="symmetric"):
    # slope_constraint is referenced in the verbose branch below; without this
    # parameter that branch raised a NameError
# https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6983028/
import utils.dtw as dtw
random_points = np.random.randint(low=1, high=x.shape[1]-1, size=x.shape[0])
window = np.ceil(x.shape[1] / 10.).astype(int)
orig_steps = np.arange(x.shape[1])
l = np.argmax(labels, axis=1) if labels.ndim > 1 else labels
ret = np.zeros_like(x)
for i, pat in enumerate(tqdm(x)):
        # guarantees that the same one isn't selected
choices = np.delete(np.arange(x.shape[0]), i)
# remove ones of different classes
choices = np.where(l[choices] == l[i])[0]
if choices.size > 0:
random_sample = x[np.random.choice(choices)]
# SPAWNER splits the path into two randomly
path1 = dtw.dtw(pat[:random_points[i]], random_sample[:random_points[i]], dtw.RETURN_PATH, slope_constraint="symmetric", window=window)
path2 = dtw.dtw(pat[random_points[i]:], random_sample[random_points[i]:], dtw.RETURN_PATH, slope_constraint="symmetric", window=window)
combined = np.concatenate((np.vstack(path1), np.vstack(path2+random_points[i])), axis=1)
if verbose:
print(random_points[i])
dtw_value, cost, DTW_map, path = dtw.dtw(pat, random_sample,
return_flag = dtw.RETURN_ALL,
slope_constraint=slope_constraint,
window=window)
dtw.draw_graph1d(cost, DTW_map, path, pat, random_sample)
dtw.draw_graph1d(cost, DTW_map, combined, pat, random_sample)
mean = np.mean([pat[combined[0]], random_sample[combined[1]]], axis=0)
for dim in range(x.shape[2]):
ret[i,:,dim] = np.interp(orig_steps, np.linspace(0, x.shape[1]-1., num=mean.shape[0]), mean[:,dim]).T
else:
print("There is only one pattern of class %d, skipping pattern average"%l[i])
ret[i,:] = pat
return jitter(ret, sigma=sigma)
def wdba(x, labels, batch_size=6, slope_constraint="symmetric", use_window=True):
# https://ieeexplore.ieee.org/document/8215569
import utils.dtw as dtw
if use_window:
window = np.ceil(x.shape[1] / 10.).astype(int)
else:
window = None
orig_steps = np.arange(x.shape[1])
l = np.argmax(labels, axis=1) if labels.ndim > 1 else labels
ret = np.zeros_like(x)
for i in tqdm(range(ret.shape[0])):
# get the same class as i
choices = np.where(l == l[i])[0]
if choices.size > 0:
# pick random intra-class pattern
k = min(choices.size, batch_size)
random_prototypes = x[np.random.choice(choices, k, replace=False)]
# calculate dtw between all
dtw_matrix = np.zeros((k, k))
for p, prototype in enumerate(random_prototypes):
for s, sample in enumerate(random_prototypes):
if p == s:
dtw_matrix[p, s] = 0.
else:
dtw_matrix[p, s] = dtw.dtw(prototype, sample, dtw.RETURN_VALUE, slope_constraint=slope_constraint, window=window)
# get medoid
medoid_id = np.argsort(np.sum(dtw_matrix, axis=1))[0]
nearest_order = np.argsort(dtw_matrix[medoid_id])
medoid_pattern = random_prototypes[medoid_id]
# start weighted DBA
average_pattern = np.zeros_like(medoid_pattern)
weighted_sums = np.zeros((medoid_pattern.shape[0]))
for nid in nearest_order:
if nid == medoid_id or dtw_matrix[medoid_id, nearest_order[1]] == 0.:
average_pattern += medoid_pattern
weighted_sums += np.ones_like(weighted_sums)
else:
path = dtw.dtw(medoid_pattern, random_prototypes[nid], dtw.RETURN_PATH, slope_constraint=slope_constraint, window=window)
dtw_value = dtw_matrix[medoid_id, nid]
warped = random_prototypes[nid, path[1]]
weight = np.exp(np.log(0.5)*dtw_value/dtw_matrix[medoid_id, nearest_order[1]])
average_pattern[path[0]] += weight * warped
weighted_sums[path[0]] += weight
ret[i,:] = average_pattern / weighted_sums[:,np.newaxis]
else:
print("There is only one pattern of class %d, skipping pattern average"%l[i])
ret[i,:] = x[i]
return ret
# Proposed
def random_guided_warp(x, labels, slope_constraint="symmetric", use_window=True, dtw_type="normal"):
import utils.dtw as dtw
if use_window:
window = np.ceil(x.shape[1] / 10.).astype(int)
else:
window = None
orig_steps = np.arange(x.shape[1])
l = np.argmax(labels, axis=1) if labels.ndim > 1 else labels
ret = np.zeros_like(x)
for i, pat in enumerate(tqdm(x)):
        # guarantees that the same one isn't selected
choices = np.delete(np.arange(x.shape[0]), i)
# remove ones of different classes
choices = np.where(l[choices] == l[i])[0]
if choices.size > 0:
# pick random intra-class pattern
random_prototype = x[np.random.choice(choices)]
if dtw_type == "shape":
path = dtw.shape_dtw(random_prototype, pat, dtw.RETURN_PATH, slope_constraint=slope_constraint, window=window)
else:
path = dtw.dtw(random_prototype, pat, dtw.RETURN_PATH, slope_constraint=slope_constraint, window=window)
# Time warp
warped = pat[path[1]]
for dim in range(x.shape[2]):
ret[i,:,dim] = np.interp(orig_steps, np.linspace(0, x.shape[1]-1., num=warped.shape[0]), warped[:,dim]).T
else:
print("There is only one pattern of class %d, skipping timewarping"%l[i])
ret[i,:] = pat
return ret
def discriminative_guided_warp(x, labels, batch_size=6, slope_constraint="symmetric", use_window=True, dtw_type="normal", use_variable_slice=True):
import utils.dtw as dtw
if use_window:
window = np.ceil(x.shape[1] / 10.).astype(int)
else:
window = None
orig_steps = np.arange(x.shape[1])
l = np.argmax(labels, axis=1) if labels.ndim > 1 else labels
positive_batch = np.ceil(batch_size / 2).astype(int)
negative_batch = np.floor(batch_size / 2).astype(int)
ret = np.zeros_like(x)
warp_amount = np.zeros(x.shape[0])
for i, pat in enumerate(tqdm(x)):
        # guarantees that the same one isn't selected
choices = np.delete(np.arange(x.shape[0]), i)
# remove ones of different classes
positive = np.where(l[choices] == l[i])[0]
negative = np.where(l[choices] != l[i])[0]
if positive.size > 0 and negative.size > 0:
pos_k = min(positive.size, positive_batch)
neg_k = min(negative.size, negative_batch)
positive_prototypes = x[np.random.choice(positive, pos_k, replace=False)]
negative_prototypes = x[np.random.choice(negative, neg_k, replace=False)]
# vector embedding and nearest prototype in one
pos_aves = np.zeros((pos_k))
neg_aves = np.zeros((pos_k))
if dtw_type == "shape":
for p, pos_prot in enumerate(positive_prototypes):
for ps, pos_samp in enumerate(positive_prototypes):
if p != ps:
pos_aves[p] += (1./(pos_k-1.))*dtw.shape_dtw(pos_prot, pos_samp, dtw.RETURN_VALUE, slope_constraint=slope_constraint, window=window)
for ns, neg_samp in enumerate(negative_prototypes):
neg_aves[p] += (1./neg_k)*dtw.shape_dtw(pos_prot, neg_samp, dtw.RETURN_VALUE, slope_constraint=slope_constraint, window=window)
selected_id = np.argmax(neg_aves - pos_aves)
path = dtw.shape_dtw(positive_prototypes[selected_id], pat, dtw.RETURN_PATH, slope_constraint=slope_constraint, window=window)
else:
for p, pos_prot in enumerate(positive_prototypes):
for ps, pos_samp in enumerate(positive_prototypes):
if p != ps:
pos_aves[p] += (1./(pos_k-1.))*dtw.dtw(pos_prot, pos_samp, dtw.RETURN_VALUE, slope_constraint=slope_constraint, window=window)
for ns, neg_samp in enumerate(negative_prototypes):
neg_aves[p] += (1./neg_k)*dtw.dtw(pos_prot, neg_samp, dtw.RETURN_VALUE, slope_constraint=slope_constraint, window=window)
selected_id = np.argmax(neg_aves - pos_aves)
path = dtw.dtw(positive_prototypes[selected_id], pat, dtw.RETURN_PATH, slope_constraint=slope_constraint, window=window)
# Time warp
warped = pat[path[1]]
warp_path_interp = np.interp(orig_steps, np.linspace(0, x.shape[1]-1., num=warped.shape[0]), path[1])
warp_amount[i] = np.sum(np.abs(orig_steps-warp_path_interp))
for dim in range(x.shape[2]):
ret[i,:,dim] = np.interp(orig_steps, np.linspace(0, x.shape[1]-1., num=warped.shape[0]), warped[:,dim]).T
else:
print("There is only one pattern of class %d"%l[i])
ret[i,:] = pat
warp_amount[i] = 0.
if use_variable_slice:
max_warp = np.max(warp_amount)
if max_warp == 0:
# unchanged
ret = window_slice(ret, reduce_ratio=0.95)
else:
for i, pat in enumerate(ret):
                # variable slicing
ret[i] = window_slice(pat[np.newaxis,:,:], reduce_ratio=0.95+0.05*warp_amount[i]/max_warp)[0]
return ret
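# --- Hedged call sketch for the label-conditioned augmenters ---
# spawner/wdba/random_guided_warp/discriminative_guided_warp expect integer or
# one-hot labels alongside (N, T, C) data and lazily import the repo-local
# utils.dtw module. Placeholder call pattern:
#
#   x = np.random.randn(16, 64, 1)
#   y = np.random.randint(0, 2, size=16)       # two classes
#   x_aug = random_guided_warp(x, y)           # output keeps x's shape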
================================================
FILE: ts_classification_methods/selftime_cls/utils/datasets.py
================================================
def nb_dims(dataset):
if dataset in ["unipen1a", "unipen1b", "unipen1c"]:
return 2
return 1
def nb_classes(dataset):
if dataset=='MFPT':
return 15
if dataset == 'XJTU':
return 15
if dataset == "CricketX":
return 12 #300
if dataset == "UWaveGestureLibraryAll":
return 8 # 945
if dataset == "DodgerLoopDay":
return 7
if dataset == "InsectWingbeatSound":
return 11
================================================
FILE: ts_classification_methods/selftime_cls/utils/helper.py
================================================
import numpy as np
def plot2d(x, y, x2=None, y2=None, x3=None, y3=None, xlim=(-1, 1), ylim=(-1, 1), save_file=""):
import matplotlib.pyplot as plt
plt.figure(figsize=(4, 4))
plt.plot(x, y)
if x2 is not None and y2 is not None:
plt.plot(x2, y2)
if x3 is not None and y3 is not None:
plt.plot(x3, y3)
plt.xlim(xlim)
plt.ylim(ylim)
plt.tight_layout()
if save_file:
plt.savefig(save_file, "")
else:
plt.show()
return
def plot1d(x, x2=None, x3=None, ylim=(-1, 1), save_file=""):
import matplotlib.pyplot as plt
plt.figure(figsize=(6, 3))
steps = np.arange(x.shape[0])
plt.plot(steps, x)
if x2 is not None:
plt.plot(steps, x2)
if x3 is not None:
plt.plot(steps, x3)
plt.xlim(0, x.shape[0])
plt.ylim(ylim)
plt.tight_layout()
if save_file:
plt.savefig(save_file)
else:
plt.show()
return
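# Usage sketch (illustrative; the target directory must already exist):
# plot1d overlays up to three 1-D series against their step index, e.g. a raw
# series versus its augmented version.
#
#   x = np.sin(np.linspace(0, 6.28, 100))[:, None]
#   plot1d(x, x * 1.1, ylim=(-2, 2), save_file='aug_examples/demo.png')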
================================================
FILE: ts_classification_methods/selftime_cls/utils/transforms.py
================================================
import random
import torch
from utils.augmentation import *
class Raw:
def __init__(self):
pass
def __call__(self, data):
return data
class CutPiece2C:
def __init__(self, sigma):
self.sigma = sigma
def __call__(self, data):
return self.forward(data)
def forward(self, data):
return cut_piece2C(data, self.sigma)
class CutPiece3C:
def __init__(self, sigma):
self.sigma = sigma
def __call__(self, data):
return self.forward(data)
def forward(self, data):
return cut_piece3C(data, self.sigma)
class CutPiece4C:
def __init__(self, sigma):
self.sigma = sigma
def __call__(self, data):
return self.forward(data)
def forward(self, data):
return cut_piece4C(data, self.sigma)
class CutPiece5C:
def __init__(self, sigma):
self.sigma = sigma
def __call__(self, data):
return self.forward(data)
def forward(self, data):
return cut_piece5C(data, self.sigma)
class CutPiece6C:
def __init__(self, sigma):
self.sigma = sigma
def __call__(self, data):
return self.forward(data)
def forward(self, data):
return cut_piece6C(data, self.sigma)
class CutPiece7C:
def __init__(self, sigma):
self.sigma = sigma
def __call__(self, data):
return self.forward(data)
def forward(self, data):
return cut_piece7C(data, self.sigma)
class CutPiece8C:
def __init__(self, sigma):
self.sigma = sigma
def __call__(self, data):
return self.forward(data)
def forward(self, data):
return cut_piece8C(data, self.sigma)
class Jitter:
def __init__(self, sigma, p):
self.sigma = sigma
self.p = p
def __call__(self, data):
# print('### Jitter')
if random.random() < self.p:
return self.forward(data)
return data
def forward(self, data):
return jitter(data, sigma=self.sigma)
class Scaling:
def __init__(self, sigma, p):
self.sigma = sigma
self.p = p
def __call__(self, data):
# print('### Scaling')
if random.random() < self.p:
return self.forward(data)
return data
def forward(self, data):
return scaling_s(data, sigma=self.sigma)
class Cutout:
def __init__(self, sigma, p):
self.sigma = sigma
self.p = p
def __call__(self, data):
# print('### Cutout')
if random.random() < self.p:
return self.forward(data)
return data
def forward(self, data):
return cutout(data, self.sigma)
class MagnitudeWrap:
def __init__(self, sigma, knot, p):
self.sigma = sigma
self.knot = knot
self.p = p
def __call__(self, data):
# print('### MagnitudeWrap')
if random.random() < self.p:
return self.forward(data)
return data
def forward(self, data):
return magnitude_warp_s(data, sigma=self.sigma, knot=self.knot)
class TimeWarp:
def __init__(self, sigma, knot, p):
self.sigma = sigma
self.knot = knot
self.p = p
def __call__(self, data):
if random.random() < self.p:
return self.forward(data)
return data
def forward(self, data):
return time_warp_s(data, sigma=self.sigma, knot=self.knot)
class WindowSlice:
def __init__(self, reduce_ratio, p):
self.reduce_ratio = reduce_ratio
self.p = p
def __call__(self, data):
if random.random() < self.p:
return self.forward(data)
return data
def forward(self, data):
return window_slice_s(data, reduce_ratio=self.reduce_ratio)
class WindowWarp:
def __init__(self, window_ratio, scales, p):
self.window_ratio = window_ratio
self.scales = scales
self.p = p
def __call__(self, data):
if random.random() < self.p:
return self.forward(data)
return data
def forward(self, data):
return window_warp_s(data, window_ratio=self.window_ratio, scales=self.scales)
class ToTensor:
    '''
    Convert a numpy time series to a PyTorch FloatTensor on the default device.
    Attributes
    ----------
    basic : bool, stored but not used by forward
    Methods
    -------
    forward(img)
        Convert a numpy array into a FloatTensor placed on self.device
    '''
def __init__(self, basic=False):
self.basic = basic
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
def __call__(self, img):
return self.forward(img)
def forward(self, img):
        '''
        Parameters
        ----------
        img : numpy array (time series)
        Returns
        -------
        torch.FloatTensor moved to self.device
        '''
ret = torch.from_numpy(img).type(torch.FloatTensor).to(self.device)
return ret
class Compose:
def __init__(self, transforms):
self.transforms = transforms
def __call__(self, img):
return self.forward(img)
def forward(self, img):
for t in self.transforms:
img = t(img)
return img
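# --- Composition sketch (parameter values below are placeholders) ---
# Transforms chain like torchvision pipelines: each callable receives the
# (T, C) numpy series, and a trailing ToTensor moves the result to the
# default device.
#
#   pipeline = Compose([Jitter(sigma=0.2, p=0.2),
#                       MagnitudeWrap(sigma=0.3, knot=4, p=0.2),
#                       ToTensor()])
#   tensor_series = pipeline(numpy_series)   # FloatTensor on GPU if available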
================================================
FILE: ts_classification_methods/selftime_cls/utils/utils.py
================================================
# -*- coding: utf-8 -*-
import json
def get_config_from_json(json_file):
"""
Get the config from a json file
:param json_file:
:return: config(dictionary)
"""
# parse the configurations from the config json file provided
with open(json_file, 'r') as config_file:
config_dict = json.load(config_file)
return config_dict
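# Example (illustrative values): train_ssl.py reads at least 'class_type' and
# 'piece_size' from '<config_dir>/<dataset>_config.json', e.g. a file holding
# {"class_type": "3C", "piece_size": 0.2} would be consumed as:
#
#   config = get_config_from_json('./config/UWaveGestureLibraryAll_config.json')
#   class_type, piece_size = config['class_type'], config['piece_size']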
================================================
FILE: ts_classification_methods/selftime_cls/utils/utils_plot.py
================================================
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
import numpy as np
import os
def show_samples(X_train, y_train, dataset_name, figname='', num_shown=5):
    '''
    :param X_train: training series (first axis indexes samples)
    :param y_train: integer class labels
    :param dataset_name: used in the saved figure filename
    :param figname: optional suffix for the figure filename
    :param num_shown: number of samples plotted per class
    '''
num_cls = np.max(y_train)+1
samples={}
for cls in range(num_cls):
idx = np.where(y_train==cls)[0]
# np.random.shuffle(idx)
samples[cls] = X_train[idx[:num_shown]]
plt.figure(figsize=(num_shown*3, num_cls))
for i in range(1, num_cls+1):
for j in range(1, num_shown+1):
plt.subplot(num_cls, num_shown, j+(i-1)*num_shown)
plt.plot(samples[i-1][j-1])
plt.tight_layout()
if not os.path.exists('Samples'):
os.makedirs('Samples')
plt.savefig('Samples/{}_{}.png'.format(dataset_name, figname))
plt.close()
================================================
FILE: ts_classification_methods/test/__init__.py
================================================
================================================
FILE: ts_classification_methods/test/train_uea_test.py
================================================
import argparse
import time
import numpy as np
import torch
from torch.utils.data import DataLoader
from data.dataloader import UEADataset
from data.preprocessing import fill_nan_value, normalize_train_val_test, load_UEA, \
normalize_uea_set
from tsm_utils import build_model, set_seed, build_loss, evaluate, get_all_datasets, save_cls_result
if __name__ == '__main__':
uea_all = ['ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions', 'CharacterTrajectories',
'Cricket', 'DuckDuckGeese', 'EigenWorms', 'Epilepsy', 'EthanolConcentration', 'ERing',
'FaceDetection', 'FingerMovements', 'HandMovementDirection', 'Handwriting',
'Heartbeat', 'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PenDigits', 'PEMS-SF', 'PhonemeSpectra', 'RacketSports', 'SelfRegulationSCP1',
'SelfRegulationSCP2', 'SpokenArabicDigits', 'StandWalkJump', 'UWaveGestureLibrary']
# uea_all = ['StandWalkJump', 'UWaveGestureLibrary']
for dataset in uea_all:
# dataset = 'BasicMotions'
parser = argparse.ArgumentParser()
# Base setup
parser.add_argument('--backbone', type=str, default='fcn', help='encoder backbone, fcn or dilated')
parser.add_argument('--task', type=str, default='classification', help='classification or reconstruction')
parser.add_argument('--random_seed', type=int, default=42, help='shuffle seed')
# Dataset setup
parser.add_argument('--dataset', type=str, default=dataset, help='dataset(in ucr)')
parser.add_argument('--dataroot', type=str, default='/SSD/lz/Multivariate2018_arff', help='path of UCR folder')
parser.add_argument('--num_classes', type=int, default=0, help='number of class')
parser.add_argument('--normalize_way', type=str, default='single', help='single or train_set')
parser.add_argument('--seq_len', type=int, default=46, help='seq_len')
parser.add_argument('--input_size', type=int, default=1, help='input_size')
# Dilated Convolution setup
parser.add_argument('--depth', type=int, default=3, help='depth of the dilated conv model')
parser.add_argument('--in_channels', type=int, default=1, help='input data channel')
parser.add_argument('--embedding_channels', type=int, default=40, help='mid layer channel')
parser.add_argument('--reduced_size', type=int, default=160, help='number of channels after Global max Pool')
parser.add_argument('--out_channels', type=int, default=320, help='number of channels after linear layer')
parser.add_argument('--kernel_size', type=int, default=3, help='convolution kernel size')
# training setup
parser.add_argument('--loss', type=str, default='cross_entropy', help='loss function')
parser.add_argument('--optimizer', type=str, default='adam', help='optimizer')
parser.add_argument('--lr', type=float, default=0.001, help='learning rate')
parser.add_argument('--weight_decay', type=float, default=0.0, help='weight decay')
        parser.add_argument('--batch_size', type=int, default=128, help='batch size; use larger values (16-128) on bigger datasets')
parser.add_argument('--epoch', type=int, default=20, help='training epoch')
        parser.add_argument('--mode', type=str, default='directly_cls', help='train mode (default: directly_cls)')
parser.add_argument('--save_dir', type=str, default='/SSD/lz/time_tsm/result_tsm')
parser.add_argument('--save_csv_name', type=str, default='ex2_test_all_uea_0530_')
parser.add_argument('--continue_training', type=int, default=0, help='continue training')
parser.add_argument('--cuda', type=str, default='cuda:0')
# Decoder setup
parser.add_argument('--decoder_backbone', type=str, default='rnn', help='backbone of the decoder (rnn or fcn)')
# classifier setup
parser.add_argument('--classifier', type=str, default='nonlinear', help='type of classifier(linear or nonlinear)')
parser.add_argument('--classifier_input', type=int, default=128, help='input dim of the classifiers')
parser.add_argument('--classifier_embedding', type=int, default=128,
help='embedding dim of the non linear classifier')
        # finetune setup
parser.add_argument('--source_dataset', type=str, default=None, help='source dataset of the pretrained model')
parser.add_argument('--transfer_strategy', type=str, default='classification',
help='classification or reconstruction')
# parser.add_argument('--direct_train')
args = parser.parse_args()
device = torch.device(args.cuda if torch.cuda.is_available() else "cpu")
set_seed(args)
# sum_dataset, sum_target, num_classes = build_dataset(args)
sum_dataset, sum_target, num_classes = load_UEA(args.dataroot, args.dataset)
args.num_classes = num_classes
args.seq_len = sum_dataset.shape[1]
args.input_size = sum_dataset.shape[2]
# print("test: sum_dataset.shape = ", sum_dataset.shape)
if sum_dataset.shape[0] * 0.6 < args.batch_size:
args.batch_size = 16
model, classifier = build_model(args)
model, classifier = model.to(device), classifier.to(device)
loss = build_loss(args).to(device)
model_init_state = model.state_dict()
classifier_init_state = classifier.state_dict()
# print("model = ", model)
if args.optimizer == 'adam':
optimizer = torch.optim.Adam([{'params': model.parameters()}, {'params': classifier.parameters()}],
lr=args.lr, weight_decay=args.weight_decay)
elif args.optimizer == 'sgd':
optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
if args.mode == 'directly_cls':
print('start finetune on {}'.format(args.dataset))
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = get_all_datasets(
sum_dataset, sum_target)
losses = []
test_accuracies = []
train_time = 0.0
end_val_epochs = []
for i, train_dataset in enumerate(train_datasets):
t = time.time()
model.load_state_dict(model_init_state)
classifier.load_state_dict(classifier_init_state)
                print('fold {}: start training and evaluation'.format(i))
train_target = train_targets[i]
val_dataset = val_datasets[i]
val_target = val_targets[i]
test_dataset = test_datasets[i]
test_target = test_targets[i]
train_dataset, val_dataset, test_dataset = fill_nan_value(train_dataset, val_dataset, test_dataset)
# print(train_dataset.shape, val_dataset.shape, test_dataset.shape)
if test_dataset.shape[0] < args.batch_size:
args.batch_size = args.batch_size // 2
if args.normalize_way == 'single':
train_dataset = normalize_uea_set(train_dataset)
val_dataset = normalize_uea_set(val_dataset)
test_dataset = normalize_uea_set(test_dataset)
else:
train_dataset, val_dataset, test_dataset = normalize_train_val_test(train_dataset, val_dataset,
test_dataset)
train_set = UEADataset(torch.from_numpy(train_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(train_target).type(torch.FloatTensor).to(device).to(torch.int64))
val_set = UEADataset(torch.from_numpy(val_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(val_target).type(torch.FloatTensor).to(device).to(torch.int64))
test_set = UEADataset(torch.from_numpy(test_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(test_target).type(torch.FloatTensor).to(device).to(torch.int64))
train_loader = DataLoader(train_set, batch_size=args.batch_size, num_workers=0, drop_last=True)
val_loader = DataLoader(val_set, batch_size=args.batch_size, num_workers=0)
test_loader = DataLoader(test_set, batch_size=args.batch_size, num_workers=0)
train_loss = []
train_accuracy = []
last_loss = float('inf')
stop_count = 0
increase_count = 0
test_accuracy = 0
min_val_loss = float('inf')
end_val_epoch = 0
num_steps = train_set.__len__() // args.batch_size
# print("test, args.batch_size = ", args.batch_size, ", num_steps = ", num_steps, train_set.__len__())
for epoch in range(args.epoch):
# early stopping in finetune
if stop_count == 50 or increase_count == 50:
                        print('model converged at epoch {}, early stopping'.format(epoch))
break
epoch_train_loss = 0
epoch_train_acc = 0
model.train()
classifier.train()
for x, y in train_loader:
# print("type 2 = ", type(x))
optimizer.zero_grad()
pred = model(x)
pred = classifier(pred)
step_loss = loss(pred, y)
step_loss.backward()
optimizer.step()
epoch_train_loss += step_loss.item()
epoch_train_acc += torch.sum(torch.argmax(pred.data, axis=1) == y) / len(y)
epoch_train_loss /= num_steps
epoch_train_acc /= num_steps
model.eval()
classifier.eval()
val_loss, val_accu = evaluate(val_loader, model, classifier, loss, device)
if min_val_loss > val_loss:
min_val_loss = val_loss
end_val_epoch = epoch
test_loss, test_accuracy = evaluate(test_loader, model, classifier, loss, device)
if epoch % 100 == 0:
print(
"epoch : {}, train loss: {} , train accuracy : {}, \nval loss : {}, val accuracy : {}, \ntest loss : {}, test accuracy : {}".format(
epoch, epoch_train_loss, epoch_train_acc, val_loss, val_accu, test_loss, test_accuracy))
if abs(last_loss - val_loss) <= 1e-4:
stop_count += 1
else:
stop_count = 0
if val_loss > last_loss:
increase_count += 1
else:
increase_count = 0
last_loss = val_loss
test_accuracies.append(test_accuracy)
end_val_epochs.append(end_val_epoch)
t = time.time() - t
train_time += t
        print('fold {}: finished training'.format(i))
test_accuracies = torch.Tensor(test_accuracies)
end_val_epochs = np.array(end_val_epochs)
save_cls_result(args, test_accu=torch.mean(test_accuracies), test_std=torch.std(test_accuracies),
train_time=train_time / 5, end_val_epoch=np.mean(end_val_epochs), seeds=args.random_seed)
print('Done!')
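# --- Editor's note (hedged sketch, not part of the original script) ----------------------
# The loop above combines two early-stopping counters: stop_count grows while the validation
# loss plateaus (|last_loss - val_loss| <= 1e-4) and increase_count grows while it rises;
# hitting 50 on either halts training. A minimal standalone illustration of the same rule:
#
#     def should_stop(val_losses, tol=1e-4, patience=50):
#         stop_count = increase_count = 0
#         last = float('inf')
#         for v in val_losses:
#             stop_count = stop_count + 1 if abs(last - v) <= tol else 0
#             increase_count = increase_count + 1 if v > last else 0
#             if stop_count >= patience or increase_count >= patience:
#                 return True
#             last = v
#         return False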
================================================
FILE: ts_classification_methods/test/uea_test.py
================================================
import numpy as np
import torch
from data.preprocessing import fill_nan_value, normalize_uea_set
from data.preprocessing import load_UEA
uea_all = ['ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions', 'CharacterTrajectories',
'Cricket', 'DuckDuckGeese', 'EigenWorms', 'Epilepsy', 'EthanolConcentration', 'ERing',
'FaceDetection', 'FingerMovements', 'HandMovementDirection', 'Handwriting',
'Heartbeat', 'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PenDigits', 'PEMS-SF', 'PhonemeSpectra', 'RacketSports', 'SelfRegulationSCP1',
'SelfRegulationSCP2', 'SpokenArabicDigits', 'StandWalkJump', 'UWaveGestureLibrary']
uea_all = ['FaceDetection']  # override the full list above: restrict to one dataset for a quick smoke test
dataroot = '/SSD/lz/Multivariate2018_arff'
i = 0
for dataset_name in uea_all:
sum_dataset, sum_target, num_classes = load_UEA(dataroot,
dataset_name) ## (num_size, series_length, num_dimensions)
    # inspect per-sample shapes; each sample is (series_length, num_dimensions)
    for t_data in sum_dataset:
        series_length, num_dimensions = t_data.shape
        print(series_length, num_dimensions)
new_torch_sum = torch.tensor(sum_dataset).permute(0, 2, 1)
print("i = ", i, ", dataset_name = ", dataset_name, ", shape = ", sum_dataset.shape, new_torch_sum.shape,
num_classes)
    if np.isnan(sum_dataset).any():
        print("Found NaN values; applying mean imputation.")
        sum_dataset, _, _ = fill_nan_value(sum_dataset, sum_dataset, sum_dataset)
        sum_dataset = normalize_uea_set(sum_dataset)
        if np.isnan(sum_dataset).any():
            print("NaN values remain after imputation!")
        else:
            print("Mean imputation succeeded.")
i += 1
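# Editorial note (assumption, inferred from the prints above): fill_nan_value appears to
# impute NaNs with training-set means; passing sum_dataset three times makes the imputation
# self-referential, which is acceptable for this standalone sanity check but would leak
# statistics in a real train/val/test split.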
================================================
FILE: ts_classification_methods/timesnet/__init__.py
================================================
================================================
FILE: ts_classification_methods/timesnet/main_timesnet.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import argparse
import time
import numpy as np
import torch
from torch.utils.data import DataLoader
from gpt4ts.gpt4ts_utils import load_UEA, normalize_uea_set, UEADataset, save_cls_new_result, set_seed, fill_nan_value, get_all_datasets, build_loss
from timesnet.models.TimesNet import Model
def collate_fn(data, device, max_len=None):
"""Build mini-batch tensors from a list of (X, mask) tuples. Mask input. Create
Args:
data: len(batch_size) list of tuples (X, y).
- X: torch tensor of shape (seq_length, feat_dim); variable seq_length.
- y: torch tensor of shape (num_labels,) : class indices or numerical targets
(for classification or regression, respectively). num_labels > 1 for multi-task models
max_len: global fixed sequence length. Used for architectures requiring fixed length input,
where the batch length cannot vary dynamically. Longer sequences are clipped, shorter are padded with 0s
Returns:
X: (batch_size, padded_length, feat_dim) torch tensor of masked features (input)
targets: (batch_size, padded_length, feat_dim) torch tensor of unmasked features (output)
target_masks: (batch_size, padded_length, feat_dim) boolean torch tensor
0 indicates masked values to be predicted, 1 indicates unaffected/"active" feature values
padding_masks: (batch_size, padded_length) boolean tensor, 1 means keep vector at this position, 0 means padding
"""
batch_size = len(data)
features, labels = zip(*data)
# Stack and pad features and masks (convert 2D to 3D tensors, i.e. add batch dimension)
lengths = [X.shape[0] for X in features] # original sequence length for each time series
if max_len is None:
max_len = max(lengths)
X = torch.zeros(batch_size, max_len, features[0].shape[-1]) # (batch_size, padded_length, feat_dim)
for i in range(batch_size):
end = min(lengths[i], max_len)
X[i, :end, :] = features[i][:end, :]
targets = torch.stack(labels, dim=0) # (batch_size, num_labels)
padding_masks = padding_mask(torch.tensor(lengths, dtype=torch.int16),
max_len=max_len) # (batch_size, padded_length) boolean tensor, "1" means keep
return X.to(device), targets.to(device), padding_masks.to(device)
def padding_mask(lengths, max_len=None):
"""
Used to mask padded positions: creates a (batch_size, max_len) boolean mask from a tensor of sequence lengths,
where 1 means keep element at this position (time step)
"""
batch_size = lengths.numel()
    max_len = max_len or lengths.max()  # fixed: tensors have no max_val(); 'or' falls back to the computed max when max_len is None
return (torch.arange(0, max_len, device=lengths.device)
.type_as(lengths)
.repeat(batch_size, 1)
.lt(lengths.unsqueeze(1)))
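# Hedged worked example (editor's addition): for lengths = torch.tensor([2, 4]) and
# max_len = 4, torch.arange(4) is broadcast against each length, yielding
#     [[True, True, False, False],
#      [True, True, True,  True ]]
# so True marks real time steps and False marks padding.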
def evaluate_gpt4ts(args, val_loader, model, loss):
val_loss = 0
val_accu = 0
sum_len = 0
for data, target, padding_x_mask in val_loader:
        # data, target and padding_x_mask already live on the target device (moved in collate_fn)
with torch.no_grad():
val_pred = model(data, padding_x_mask)
val_loss += loss(val_pred, target).item()
val_accu += torch.sum(torch.argmax(val_pred.data, axis=1) == target)
sum_len += len(target)
return val_loss / sum_len, val_accu / sum_len
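# Editorial note: this helper keeps its gpt4ts name, apparently carried over from the gpt4ts
# script it was adapted from. val_loss sums per-batch mean losses and divides by the sample
# count, so it is a rescaled batch-mean loss rather than an exact per-sample cross-entropy;
# val_accu / sum_len, by contrast, is an exact accuracy.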
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# Base setup
parser.add_argument('--backbone', type=str, default='fcn', help='encoder backbone, fcn')
parser.add_argument('--random_seed', type=int, default=42, help='shuffle seed')
# UEA, TimesNet: ['EigenWorms', 'LSST', 'StandWalkJump']
# Dataset setup
parser.add_argument('--dataset', type=str, default='StandWalkJump',
help='dataset(in ucr)') # LSST Heartbeat Images SelfRegulationSCP2
# parser.add_argument('--dataroot', type=str, default='../UCRArchive_2018', help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/dev_data/lz/time_series_pretrain/datasets/UCRArchive_2018',
# help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/SSD/lz/UCRArchive_2018', help='path of UCR folder')
parser.add_argument('--dataroot', type=str, default='/SSD/lz/Multivariate2018_arff', help='path of UEA folder')
parser.add_argument('--num_classes', type=int, default=0, help='number of class')
parser.add_argument('--normalize_way', type=str, default='single', help='single or train_set')
# parser.add_argument('--seq_len', type=int, default=46, help='seq_len')
parser.add_argument('--input_size', type=int, default=1, help='input_size')
# parser.add_argument('--patch_size', type=int, default=8, help='patch_size')
# parser.add_argument('--stride', type=int, default=8, help='stride')
parser.add_argument('--target_points', type=int, default=96, help='forecast horizon')
# Patch
parser.add_argument('--patch_len', type=int, default=8, help='patch length')
parser.add_argument('--stride', type=int, default=8, help='stride between patch')
# # RevIN
# parser.add_argument('--revin', type=int, default=1, help='reversible instance normalization')
# # Model args
# parser.add_argument('--n_layers', type=int, default=3, help='number of Transformer layers')
# parser.add_argument('--n_heads', type=int, default=16, help='number of Transformer heads')
# # parser.add_argument('--d_model', type=int, default=128, help='Transformer d_model')
    # parser.add_argument('--d_ff', type=int, default=256, help='Transformer MLP dimension')
# parser.add_argument('--dropout', type=float, default=0.2, help='Transformer dropout')
# parser.add_argument('--head_dropout', type=float, default=0, help='head dropout')
# Semi training
    parser.add_argument('--labeled_ratio', type=float, default=0.1, help='0.1, 0.2, 0.4')
# basic config
parser.add_argument('--task_name', type=str, required=False, default='classification',
help='task name, options:[long_term_forecast, short_term_forecast, imputation, classification, anomaly_detection]')
parser.add_argument('--freq', type=str, default='h',
help='freq for time features encoding, options:[s:secondly, t:minutely, h:hourly, d:daily, b:business days, w:weekly, m:monthly], you can also use more detailed freq like 15min or 3h')
# forecasting task
parser.add_argument('--seq_len', type=int, default=96, help='input sequence length')
parser.add_argument('--label_len', type=int, default=48, help='start token length')
parser.add_argument('--pred_len', type=int, default=0, help='prediction sequence length')
parser.add_argument('--seasonal_patterns', type=str, default='Monthly', help='subset for M4')
parser.add_argument('--inverse', action='store_true', help='inverse output data', default=False)
# model define
parser.add_argument('--top_k', type=int, default=3, help='for TimesBlock')
parser.add_argument('--num_kernels', type=int, default=6, help='for Inception')
parser.add_argument('--enc_in', type=int, default=7, help='encoder input size')
parser.add_argument('--dec_in', type=int, default=7, help='decoder input size')
parser.add_argument('--c_out', type=int, default=7, help='output size')
parser.add_argument('--d_model', type=int, default=64, help='dimension of model') ###
parser.add_argument('--n_heads', type=int, default=8, help='num of heads')
parser.add_argument('--e_layers', type=int, default=3, help='num of encoder layers')
parser.add_argument('--d_layers', type=int, default=1, help='num of decoder layers')
parser.add_argument('--d_ff', type=int, default=64, help='dimension of fcn')
parser.add_argument('--moving_avg', type=int, default=25, help='window size of moving average')
parser.add_argument('--factor', type=int, default=1, help='attn factor')
parser.add_argument('--distil', action='store_false',
help='whether to use distilling in encoder, using this argument means not using distilling',
default=True)
parser.add_argument('--dropout', type=float, default=0.1, help='dropout')
parser.add_argument('--embed', type=str, default='timeF',
help='time features encoding, options:[timeF, fixed, learned]')
parser.add_argument('--activation', type=str, default='gelu', help='activation')
    parser.add_argument('--output_attention', action='store_true', help='whether to output attention in encoder')
# GPU
parser.add_argument('--use_gpu', type=bool, default=True, help='use gpu')
parser.add_argument('--gpu', type=int, default=1, help='gpu')
# training setup
parser.add_argument('--loss', type=str, default='cross_entropy', help='loss function')
parser.add_argument('--optimizer', type=str, default='adam', help='optimizer')
parser.add_argument('--lr', type=float, default=0.001, help='learning rate')
parser.add_argument('--weight_decay', type=float, default=0.0, help='weight decay')
parser.add_argument('--batch_size', type=int, default=8, help='')
parser.add_argument('--epoch', type=int, default=10, help='training epoch')
parser.add_argument('--cuda', type=str, default='cuda:1')
parser.add_argument('--save_dir', type=str, default='/SSD/lz/time_series_label_noise/result')
parser.add_argument('--save_csv_name', type=str, default='timesnet_UEA_supervised_0731_')
# classifier setup
parser.add_argument('--classifier', type=str, default='linear', help='type of classifier(linear or nonlinear)')
parser.add_argument('--classifier_input', type=int, default=128, help='input dim of the classifiers')
args = parser.parse_args()
device = torch.device(args.cuda if torch.cuda.is_available() else "cpu")
set_seed(args)
# sum_dataset, sum_target, num_classes = build_dataset(args)
sum_dataset, sum_target, num_classes = load_UEA(args.dataroot, args.dataset)
# args.num_classes = num_classes
# args.seq_len = sum_dataset.shape[1]
args.num_classes = num_classes
args.seq_len = sum_dataset.shape[1]
args.input_size = sum_dataset.shape[2]
args.enc_in = sum_dataset.shape[2]
# # get number of patches
# num_patch = (max(args.seq_len, args.patch_len) - args.patch_len) // args.stride + 1
# print('number of patches:', num_patch)
while sum_dataset.shape[0] * 0.6 < args.batch_size:
args.batch_size = args.batch_size // 2
print("args.batch_size = ", args.batch_size, ", sum_dataset.shape = ", sum_dataset.shape)
# get model
model = Model(configs=args)
# model = gpt4ts(max_seq_len=args.seq_len, num_classes=args.num_classes, var_len=args.input_size, patch_size=args.patch_size, stride=args.stride)
model = model.to(device)
# model, classifier = build_model(args)
# model, classifier = model.to(device), classifier.to(device)
loss = build_loss(args).to(device)
model_init_state = model.state_dict()
# classifier_init_state = classifier.state_dict()
if args.optimizer == 'adam':
optimizer = torch.optim.Adam([{'params': model.parameters()}],
lr=args.lr, weight_decay=args.weight_decay)
elif args.optimizer == 'sgd':
optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = get_all_datasets(
sum_dataset, sum_target)
losses = []
test_accuracies = []
train_time = 0.0
end_val_epochs = []
for i, train_dataset in enumerate(train_datasets):
t = time.time()
model.load_state_dict(model_init_state)
# classifier.load_state_dict(classifier_init_state)
        print('fold {}: start training and evaluation'.format(i))
train_target = train_targets[i]
val_dataset = val_datasets[i]
val_target = val_targets[i]
test_dataset = test_datasets[i]
test_target = test_targets[i]
train_dataset, val_dataset, test_dataset = fill_nan_value(train_dataset, val_dataset, test_dataset)
if args.normalize_way == 'single':
# TODO normalize per series
train_dataset = normalize_uea_set(train_dataset)
val_dataset = normalize_uea_set(val_dataset)
test_dataset = normalize_uea_set(test_dataset)
# else:
# train_dataset, val_dataset, test_dataset = normalize_train_val_test(train_dataset, val_dataset,
# test_dataset)
train_set = UEADataset(torch.from_numpy(train_dataset).type(torch.FloatTensor).to(device).permute(0,2,1),
torch.from_numpy(train_target).type(torch.FloatTensor).to(device).to(torch.int64))
val_set = UEADataset(torch.from_numpy(val_dataset).type(torch.FloatTensor).to(device).permute(0,2,1),
torch.from_numpy(val_target).type(torch.FloatTensor).to(device).to(torch.int64))
test_set = UEADataset(torch.from_numpy(test_dataset).type(torch.FloatTensor).to(device).permute(0,2,1),
torch.from_numpy(test_target).type(torch.FloatTensor).to(device).to(torch.int64))
# train_set = train_set.permute(0,2,1)
train_loader = DataLoader(train_set, batch_size=args.batch_size, num_workers=0, drop_last=True, collate_fn=lambda x: collate_fn(x, device, max_len=args.seq_len))
val_loader = DataLoader(val_set, batch_size=args.batch_size, num_workers=0, collate_fn=lambda x: collate_fn(x, device, max_len=args.seq_len))
test_loader = DataLoader(test_set, batch_size=args.batch_size, num_workers=0, collate_fn=lambda x: collate_fn(x, device, max_len=args.seq_len))
train_loss = []
train_accuracy = []
last_loss = float('inf')
stop_count = 0
increase_count = 0
        num_steps = len(train_set) // args.batch_size
min_val_loss = float('inf')
test_accuracy = 0
end_val_epoch = 0
for epoch in range(args.epoch):
if stop_count == 50 or increase_count == 50:
                print('model converged at epoch {}, early stopping'.format(epoch))
break
epoch_train_loss = 0
epoch_train_acc = 0
num_iterations = 0
model.train()
train_embed = []
for x, y, padding_x_mask in train_loader:
optimizer.zero_grad()
# print("raw x.shape = ", x.shape)
# xb, num_patch = create_patch(xb=x.permute(0,2,1), patch_len=args.patch_len, stride=args.stride)
# print("x padding_x_mask.shape = ", x.shape, padding_x_mask.shape, padding_x_mask[0][:10])
# print("x.shape = ", x.shape, ", padding_x_mask.shape = ", padding_x_mask.shape)
pred = model(x, padding_x_mask)
step_loss = loss(pred, y)
# step_loss.backward(retain_graph=True)
step_loss.backward()
optimizer.step()
epoch_train_loss += step_loss.item()
epoch_train_acc += torch.sum(torch.argmax(pred.data, axis=1) == y) / len(y)
num_iterations += 1
epoch_train_loss /= num_steps
epoch_train_acc /= num_steps
# train_embed = np.concatenate(train_embed)
model.eval()
val_loss, val_accu = evaluate_gpt4ts(args, val_loader, model, loss)
if min_val_loss > val_loss:
min_val_loss = val_loss
end_val_epoch = epoch
test_loss, test_accuracy = evaluate_gpt4ts(args, test_loader, model, loss)
if abs(last_loss - val_loss) <= 1e-4:
stop_count += 1
else:
stop_count = 0
if val_loss > last_loss:
increase_count += 1
else:
increase_count = 0
last_loss = val_loss
if epoch % 50 == 0:
print(
"epoch : {}, train loss: {} , train accuracy : {}, \ntest_accuracy : {}".format(
epoch, epoch_train_loss, epoch_train_acc, test_accuracy))
test_accuracies.append(test_accuracy)
end_val_epochs.append(end_val_epoch)
t = time.time() - t
train_time += t
        print('fold {}: finished training'.format(i))
test_accuracies = torch.Tensor(test_accuracies)
print("Training end: mean_test_acc = ", round(torch.mean(test_accuracies).item(), 4),
"traning time (seconds) = ",
round(train_time, 4), ", seed = ", args.random_seed)
test_accuracies = test_accuracies.cpu().numpy()
save_cls_new_result(args, np.mean(test_accuracies), np.max(test_accuracies), np.min(test_accuracies),
np.std(test_accuracies), train_time)
print('Done!')
================================================
FILE: ts_classification_methods/timesnet/main_timesnet_ucr.py
================================================
import os
import sys
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import argparse
import time
import numpy as np
import torch
from torch.utils.data import DataLoader
from gpt4ts.gpt4ts_utils import load_UEA, normalize_uea_set, UEADataset, save_cls_new_result, set_seed, fill_nan_value, get_all_datasets, build_loss, build_dataset
from timesnet.models.TimesNet import Model
def collate_fn(data, device, max_len=None):
"""Build mini-batch tensors from a list of (X, mask) tuples. Mask input. Create
Args:
data: len(batch_size) list of tuples (X, y).
- X: torch tensor of shape (seq_length, feat_dim); variable seq_length.
- y: torch tensor of shape (num_labels,) : class indices or numerical targets
(for classification or regression, respectively). num_labels > 1 for multi-task models
max_len: global fixed sequence length. Used for architectures requiring fixed length input,
where the batch length cannot vary dynamically. Longer sequences are clipped, shorter are padded with 0s
Returns:
X: (batch_size, padded_length, feat_dim) torch tensor of masked features (input)
targets: (batch_size, padded_length, feat_dim) torch tensor of unmasked features (output)
target_masks: (batch_size, padded_length, feat_dim) boolean torch tensor
0 indicates masked values to be predicted, 1 indicates unaffected/"active" feature values
padding_masks: (batch_size, padded_length) boolean tensor, 1 means keep vector at this position, 0 means padding
"""
batch_size = len(data)
features, labels = zip(*data)
# Stack and pad features and masks (convert 2D to 3D tensors, i.e. add batch dimension)
lengths = [X.shape[0] for X in features] # original sequence length for each time series
if max_len is None:
max_len = max(lengths)
X = torch.zeros(batch_size, max_len, features[0].shape[-1]) # (batch_size, padded_length, feat_dim)
for i in range(batch_size):
end = min(lengths[i], max_len)
X[i, :end, :] = features[i][:end, :]
targets = torch.stack(labels, dim=0) # (batch_size, num_labels)
padding_masks = padding_mask(torch.tensor(lengths, dtype=torch.int16),
max_len=max_len) # (batch_size, padded_length) boolean tensor, "1" means keep
return X.to(device), targets.to(device), padding_masks.to(device)
def padding_mask(lengths, max_len=None):
"""
Used to mask padded positions: creates a (batch_size, max_len) boolean mask from a tensor of sequence lengths,
where 1 means keep element at this position (time step)
"""
batch_size = lengths.numel()
    max_len = max_len or lengths.max()  # fixed: tensors have no max_val(); 'or' falls back to the computed max when max_len is None
return (torch.arange(0, max_len, device=lengths.device)
.type_as(lengths)
.repeat(batch_size, 1)
.lt(lengths.unsqueeze(1)))
def evaluate_gpt4ts(args, val_loader, model, loss):
val_loss = 0
val_accu = 0
sum_len = 0
for data, target, padding_x_mask in val_loader:
        # data, target and padding_x_mask already live on the target device (moved in collate_fn)
with torch.no_grad():
val_pred = model(data, padding_x_mask)
val_loss += loss(val_pred, target).item()
val_accu += torch.sum(torch.argmax(val_pred.data, axis=1) == target)
sum_len += len(target)
return val_loss / sum_len, val_accu / sum_len
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# UCR, TimesNet: ['HandOutlines', 'InlineSkate', 'StarLightCurves']
# UEA, TimesNet: ['EigenWorms', 'LSST', 'StandWalkJump']
# Base setup
parser.add_argument('--backbone', type=str, default='fcn', help='encoder backbone, fcn')
parser.add_argument('--random_seed', type=int, default=42, help='shuffle seed')
# Dataset setup
parser.add_argument('--dataset', type=str, default='StarLightCurves',
help='dataset(in ucr)') # LSST Heartbeat Images SelfRegulationSCP2
# parser.add_argument('--dataroot', type=str, default='../UCRArchive_2018', help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/dev_data/lz/time_series_pretrain/datasets/UCRArchive_2018',
# help='path of UCR folder')
# parser.add_argument('--dataroot', type=str, default='/SSD/lz/UCRArchive_2018', help='path of UCR folder')
parser.add_argument('--dataroot', type=str, default='/SSD/lz/UCRArchive_2018', help='path of UEA folder')
parser.add_argument('--num_classes', type=int, default=0, help='number of class')
parser.add_argument('--normalize_way', type=str, default='single', help='single or train_set')
# parser.add_argument('--seq_len', type=int, default=46, help='seq_len')
parser.add_argument('--input_size', type=int, default=1, help='input_size')
# parser.add_argument('--patch_size', type=int, default=8, help='patch_size')
# parser.add_argument('--stride', type=int, default=8, help='stride')
parser.add_argument('--target_points', type=int, default=96, help='forecast horizon')
# Patch
parser.add_argument('--patch_len', type=int, default=8, help='patch length')
parser.add_argument('--stride', type=int, default=8, help='stride between patch')
# # RevIN
# parser.add_argument('--revin', type=int, default=1, help='reversible instance normalization')
# # Model args
# parser.add_argument('--n_layers', type=int, default=3, help='number of Transformer layers')
# parser.add_argument('--n_heads', type=int, default=16, help='number of Transformer heads')
# # parser.add_argument('--d_model', type=int, default=128, help='Transformer d_model')
    # parser.add_argument('--d_ff', type=int, default=256, help='Transformer MLP dimension')
# parser.add_argument('--dropout', type=float, default=0.2, help='Transformer dropout')
# parser.add_argument('--head_dropout', type=float, default=0, help='head dropout')
# Semi training
    parser.add_argument('--labeled_ratio', type=float, default=0.1, help='0.1, 0.2, 0.4')
# basic config
parser.add_argument('--task_name', type=str, required=False, default='classification',
help='task name, options:[long_term_forecast, short_term_forecast, imputation, classification, anomaly_detection]')
parser.add_argument('--freq', type=str, default='h',
help='freq for time features encoding, options:[s:secondly, t:minutely, h:hourly, d:daily, b:business days, w:weekly, m:monthly], you can also use more detailed freq like 15min or 3h')
# forecasting task
parser.add_argument('--seq_len', type=int, default=96, help='input sequence length')
parser.add_argument('--label_len', type=int, default=48, help='start token length')
parser.add_argument('--pred_len', type=int, default=0, help='prediction sequence length')
parser.add_argument('--seasonal_patterns', type=str, default='Monthly', help='subset for M4')
parser.add_argument('--inverse', action='store_true', help='inverse output data', default=False)
# model define
parser.add_argument('--top_k', type=int, default=3, help='for TimesBlock')
parser.add_argument('--num_kernels', type=int, default=6, help='for Inception')
parser.add_argument('--enc_in', type=int, default=7, help='encoder input size')
parser.add_argument('--dec_in', type=int, default=7, help='decoder input size')
parser.add_argument('--c_out', type=int, default=7, help='output size')
parser.add_argument('--d_model', type=int, default=64, help='dimension of model') ###
parser.add_argument('--n_heads', type=int, default=8, help='num of heads')
parser.add_argument('--e_layers', type=int, default=3, help='num of encoder layers')
parser.add_argument('--d_layers', type=int, default=1, help='num of decoder layers')
parser.add_argument('--d_ff', type=int, default=64, help='dimension of fcn')
parser.add_argument('--moving_avg', type=int, default=25, help='window size of moving average')
parser.add_argument('--factor', type=int, default=1, help='attn factor')
parser.add_argument('--distil', action='store_false',
help='whether to use distilling in encoder, using this argument means not using distilling',
default=True)
parser.add_argument('--dropout', type=float, default=0.1, help='dropout')
parser.add_argument('--embed', type=str, default='timeF',
help='time features encoding, options:[timeF, fixed, learned]')
parser.add_argument('--activation', type=str, default='gelu', help='activation')
    parser.add_argument('--output_attention', action='store_true', help='whether to output attention in encoder')
# GPU
parser.add_argument('--use_gpu', type=bool, default=True, help='use gpu')
parser.add_argument('--gpu', type=int, default=1, help='gpu')
# training setup
parser.add_argument('--loss', type=str, default='cross_entropy', help='loss function')
parser.add_argument('--optimizer', type=str, default='adam', help='optimizer')
parser.add_argument('--lr', type=float, default=0.001, help='learning rate')
parser.add_argument('--weight_decay', type=float, default=0.0, help='weight decay')
parser.add_argument('--batch_size', type=int, default=8, help='')
parser.add_argument('--epoch', type=int, default=50, help='training epoch')
parser.add_argument('--cuda', type=str, default='cuda:1')
parser.add_argument('--save_dir', type=str, default='/SSD/lz/time_series_label_noise/result')
parser.add_argument('--save_csv_name', type=str, default='timesnet_ucr_supervised_0801_')
# classifier setup
parser.add_argument('--classifier', type=str, default='linear', help='type of classifier(linear or nonlinear)')
parser.add_argument('--classifier_input', type=int, default=128, help='input dim of the classifiers')
args = parser.parse_args()
device = torch.device(args.cuda if torch.cuda.is_available() else "cpu")
set_seed(args)
sum_dataset, sum_target, num_classes = build_dataset(args)
# sum_dataset, sum_target, num_classes = load_UEA(args.dataroot, args.dataset)
# args.num_classes = num_classes
# args.seq_len = sum_dataset.shape[1]
sum_dataset = sum_dataset[:, :, np.newaxis]
args.num_classes = num_classes
args.seq_len = sum_dataset.shape[1]
args.input_size = sum_dataset.shape[2]
args.enc_in = sum_dataset.shape[2]
# # get number of patches
# num_patch = (max(args.seq_len, args.patch_len) - args.patch_len) // args.stride + 1
# print('number of patches:', num_patch)
while sum_dataset.shape[0] * 0.6 < args.batch_size:
args.batch_size = args.batch_size // 2
print("args.batch_size = ", args.batch_size, ", sum_dataset.shape = ", sum_dataset.shape)
# get model
model = Model(configs=args)
# model = gpt4ts(max_seq_len=args.seq_len, num_classes=args.num_classes, var_len=args.input_size, patch_size=args.patch_size, stride=args.stride)
model = model.to(device)
# model, classifier = build_model(args)
# model, classifier = model.to(device), classifier.to(device)
loss = build_loss(args).to(device)
model_init_state = model.state_dict()
# classifier_init_state = classifier.state_dict()
if args.optimizer == 'adam':
optimizer = torch.optim.Adam([{'params': model.parameters()}],
lr=args.lr, weight_decay=args.weight_decay)
elif args.optimizer == 'sgd':
optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = get_all_datasets(
sum_dataset, sum_target)
losses = []
test_accuracies = []
train_time = 0.0
end_val_epochs = []
for i, train_dataset in enumerate(train_datasets):
t = time.time()
model.load_state_dict(model_init_state)
# classifier.load_state_dict(classifier_init_state)
        print('fold {}: start training and evaluation'.format(i))
train_target = train_targets[i]
val_dataset = val_datasets[i]
val_target = val_targets[i]
test_dataset = test_datasets[i]
test_target = test_targets[i]
train_dataset, val_dataset, test_dataset = fill_nan_value(train_dataset, val_dataset, test_dataset)
if args.normalize_way == 'single':
# TODO normalize per series
train_dataset = normalize_uea_set(train_dataset)
val_dataset = normalize_uea_set(val_dataset)
test_dataset = normalize_uea_set(test_dataset)
# else:
# train_dataset, val_dataset, test_dataset = normalize_train_val_test(train_dataset, val_dataset,
# test_dataset)
train_set = UEADataset(torch.from_numpy(train_dataset).type(torch.FloatTensor).to(device).permute(0,2,1),
torch.from_numpy(train_target).type(torch.FloatTensor).to(device).to(torch.int64))
val_set = UEADataset(torch.from_numpy(val_dataset).type(torch.FloatTensor).to(device).permute(0,2,1),
torch.from_numpy(val_target).type(torch.FloatTensor).to(device).to(torch.int64))
test_set = UEADataset(torch.from_numpy(test_dataset).type(torch.FloatTensor).to(device).permute(0,2,1),
torch.from_numpy(test_target).type(torch.FloatTensor).to(device).to(torch.int64))
# train_set = train_set.permute(0,2,1)
train_loader = DataLoader(train_set, batch_size=args.batch_size, num_workers=0, drop_last=True, collate_fn=lambda x: collate_fn(x, device, max_len=args.seq_len))
val_loader = DataLoader(val_set, batch_size=args.batch_size, num_workers=0, collate_fn=lambda x: collate_fn(x, device, max_len=args.seq_len))
test_loader = DataLoader(test_set, batch_size=args.batch_size, num_workers=0, collate_fn=lambda x: collate_fn(x, device, max_len=args.seq_len))
train_loss = []
train_accuracy = []
last_loss = float('inf')
stop_count = 0
increase_count = 0
        num_steps = len(train_set) // args.batch_size
min_val_loss = float('inf')
test_accuracy = 0
end_val_epoch = 0
for epoch in range(args.epoch):
if stop_count == 80 or increase_count == 80:
                print('model converged at epoch {}, early stopping'.format(epoch))
break
epoch_train_loss = 0
epoch_train_acc = 0
num_iterations = 0
model.train()
train_embed = []
for x, y, padding_x_mask in train_loader:
optimizer.zero_grad()
# print("raw x.shape = ", x.shape)
# xb, num_patch = create_patch(xb=x.permute(0,2,1), patch_len=args.patch_len, stride=args.stride)
# print("x padding_x_mask.shape = ", x.shape, padding_x_mask.shape, padding_x_mask[0][:10])
pred = model(x, padding_x_mask)
step_loss = loss(pred, y)
# step_loss.backward(retain_graph=True)
step_loss.backward()
optimizer.step()
epoch_train_loss += step_loss.item()
epoch_train_acc += torch.sum(torch.argmax(pred.data, axis=1) == y) / len(y)
num_iterations += 1
epoch_train_loss /= num_steps
epoch_train_acc /= num_steps
# train_embed = np.concatenate(train_embed)
model.eval()
val_loss, val_accu = evaluate_gpt4ts(args, val_loader, model, loss)
if min_val_loss > val_loss:
min_val_loss = val_loss
end_val_epoch = epoch
test_loss, test_accuracy = evaluate_gpt4ts(args, test_loader, model, loss)
if abs(last_loss - val_loss) <= 1e-4:
stop_count += 1
else:
stop_count = 0
if val_loss > last_loss:
increase_count += 1
else:
increase_count = 0
last_loss = val_loss
if epoch % 50 == 0:
print(
"epoch : {}, train loss: {} , train accuracy : {}, \ntest_accuracy : {}".format(
epoch, epoch_train_loss, epoch_train_acc, test_accuracy))
test_accuracies.append(test_accuracy)
end_val_epochs.append(end_val_epoch)
t = time.time() - t
train_time += t
        print('fold {}: finished training'.format(i))
test_accuracies = torch.Tensor(test_accuracies)
print("Training end: mean_test_acc = ", round(torch.mean(test_accuracies).item(), 4),
"traning time (seconds) = ",
round(train_time, 4), ", seed = ", args.random_seed)
test_accuracies = test_accuracies.cpu().numpy()
save_cls_new_result(args, np.mean(test_accuracies), np.max(test_accuracies), np.min(test_accuracies),
np.std(test_accuracies), train_time)
print('Done!')
================================================
FILE: ts_classification_methods/timesnet/models/Conv_Blocks.py
================================================
import torch
import torch.nn as nn
class Inception_Block_V1(nn.Module):
def __init__(self, in_channels, out_channels, num_kernels=6, init_weight=True):
super(Inception_Block_V1, self).__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.num_kernels = num_kernels
kernels = []
for i in range(self.num_kernels):
kernels.append(nn.Conv2d(in_channels, out_channels, kernel_size=2 * i + 1, padding=i))
self.kernels = nn.ModuleList(kernels)
if init_weight:
self._initialize_weights()
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
if m.bias is not None:
nn.init.constant_(m.bias, 0)
def forward(self, x):
res_list = []
for i in range(self.num_kernels):
res_list.append(self.kernels[i](x))
res = torch.stack(res_list, dim=-1).mean(-1)
return res
class Inception_Block_V2(nn.Module):
def __init__(self, in_channels, out_channels, num_kernels=6, init_weight=True):
super(Inception_Block_V2, self).__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.num_kernels = num_kernels
kernels = []
for i in range(self.num_kernels // 2):
kernels.append(nn.Conv2d(in_channels, out_channels, kernel_size=[1, 2 * i + 3], padding=[0, i + 1]))
kernels.append(nn.Conv2d(in_channels, out_channels, kernel_size=[2 * i + 3, 1], padding=[i + 1, 0]))
kernels.append(nn.Conv2d(in_channels, out_channels, kernel_size=1))
self.kernels = nn.ModuleList(kernels)
if init_weight:
self._initialize_weights()
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
if m.bias is not None:
nn.init.constant_(m.bias, 0)
def forward(self, x):
res_list = []
        for i in range(len(self.kernels)):  # iterate over the kernels actually built (robust to odd num_kernels)
res_list.append(self.kernels[i](x))
res = torch.stack(res_list, dim=-1).mean(-1)
return res
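# Hedged smoke test (editor's addition, not part of the repo): Inception_Block_V1 averages
# num_kernels parallel Conv2d branches with odd kernel sizes 1, 3, ..., 2*num_kernels - 1,
# each padded to preserve spatial size, so only the channel dimension changes.
if __name__ == '__main__':
    block = Inception_Block_V1(in_channels=16, out_channels=32, num_kernels=6)
    x = torch.randn(2, 16, 8, 24)      # (batch, channels, height, width)
    y = block(x)
    assert y.shape == (2, 32, 8, 24)   # spatial dims preserved, channels remapped
    print('Inception_Block_V1 output:', tuple(y.shape))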
================================================
FILE: ts_classification_methods/timesnet/models/Embed.py
================================================
import torch
import torch.nn as nn
import math
class PositionalEmbedding(nn.Module):
def __init__(self, d_model, max_len=25000):
super(PositionalEmbedding, self).__init__()
# Compute the positional encodings once in log space.
pe = torch.zeros(max_len, d_model).float()
        pe.requires_grad = False
position = torch.arange(0, max_len).float().unsqueeze(1)
div_term = (torch.arange(0, d_model, 2).float()
* -(math.log(10000.0) / d_model)).exp()
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0)
self.register_buffer('pe', pe)
def forward(self, x):
return self.pe[:, :x.size(1)]
class TokenEmbedding(nn.Module):
def __init__(self, c_in, d_model):
super(TokenEmbedding, self).__init__()
padding = 1 if torch.__version__ >= '1.5.0' else 2
self.tokenConv = nn.Conv1d(in_channels=c_in, out_channels=d_model,
kernel_size=3, padding=padding, padding_mode='circular', bias=False)
for m in self.modules():
if isinstance(m, nn.Conv1d):
nn.init.kaiming_normal_(
m.weight, mode='fan_in', nonlinearity='leaky_relu')
def forward(self, x):
x = self.tokenConv(x.permute(0, 2, 1)).transpose(1, 2)
return x
class FixedEmbedding(nn.Module):
def __init__(self, c_in, d_model):
super(FixedEmbedding, self).__init__()
w = torch.zeros(c_in, d_model).float()
        w.requires_grad = False
position = torch.arange(0, c_in).float().unsqueeze(1)
div_term = (torch.arange(0, d_model, 2).float()
* -(math.log(10000.0) / d_model)).exp()
w[:, 0::2] = torch.sin(position * div_term)
w[:, 1::2] = torch.cos(position * div_term)
self.emb = nn.Embedding(c_in, d_model)
self.emb.weight = nn.Parameter(w, requires_grad=False)
def forward(self, x):
return self.emb(x).detach()
class TemporalEmbedding(nn.Module):
def __init__(self, d_model, embed_type='fixed', freq='h'):
super(TemporalEmbedding, self).__init__()
minute_size = 4
hour_size = 24
weekday_size = 7
day_size = 32
month_size = 13
Embed = FixedEmbedding if embed_type == 'fixed' else nn.Embedding
if freq == 't':
self.minute_embed = Embed(minute_size, d_model)
self.hour_embed = Embed(hour_size, d_model)
self.weekday_embed = Embed(weekday_size, d_model)
self.day_embed = Embed(day_size, d_model)
self.month_embed = Embed(month_size, d_model)
def forward(self, x):
x = x.long()
minute_x = self.minute_embed(x[:, :, 4]) if hasattr(
self, 'minute_embed') else 0.
hour_x = self.hour_embed(x[:, :, 3])
weekday_x = self.weekday_embed(x[:, :, 2])
day_x = self.day_embed(x[:, :, 1])
month_x = self.month_embed(x[:, :, 0])
return hour_x + weekday_x + day_x + month_x + minute_x
class TimeFeatureEmbedding(nn.Module):
def __init__(self, d_model, embed_type='timeF', freq='h'):
super(TimeFeatureEmbedding, self).__init__()
freq_map = {'h': 4, 't': 5, 's': 6,
'm': 1, 'a': 1, 'w': 2, 'd': 3, 'b': 3}
d_inp = freq_map[freq]
self.embed = nn.Linear(d_inp, d_model, bias=False)
def forward(self, x):
return self.embed(x)
class DataEmbedding(nn.Module):
def __init__(self, c_in, d_model, embed_type='fixed', freq='h', dropout=0.1):
super(DataEmbedding, self).__init__()
self.value_embedding = TokenEmbedding(c_in=c_in, d_model=d_model)
self.position_embedding = PositionalEmbedding(d_model=d_model)
self.temporal_embedding = TemporalEmbedding(d_model=d_model, embed_type=embed_type,
freq=freq) if embed_type != 'timeF' else TimeFeatureEmbedding(
d_model=d_model, embed_type=embed_type, freq=freq)
self.dropout = nn.Dropout(p=dropout)
def forward(self, x, x_mark):
if x_mark is None:
x = self.value_embedding(x) + self.position_embedding(x)
else:
x = self.value_embedding(
x) + self.temporal_embedding(x_mark) + self.position_embedding(x)
return self.dropout(x)
class DataEmbedding_inverted(nn.Module):
def __init__(self, c_in, d_model, embed_type='fixed', freq='h', dropout=0.1):
super(DataEmbedding_inverted, self).__init__()
self.value_embedding = nn.Linear(c_in, d_model)
self.dropout = nn.Dropout(p=dropout)
def forward(self, x, x_mark):
x = x.permute(0, 2, 1)
# x: [Batch Variate Time]
if x_mark is None:
x = self.value_embedding(x)
else:
x = self.value_embedding(torch.cat([x, x_mark.permute(0, 2, 1)], 1))
# x: [Batch Variate d_model]
return self.dropout(x)
class DataEmbedding_wo_pos(nn.Module):
def __init__(self, c_in, d_model, embed_type='fixed', freq='h', dropout=0.1):
super(DataEmbedding_wo_pos, self).__init__()
self.value_embedding = TokenEmbedding(c_in=c_in, d_model=d_model)
self.position_embedding = PositionalEmbedding(d_model=d_model)
self.temporal_embedding = TemporalEmbedding(d_model=d_model, embed_type=embed_type,
freq=freq) if embed_type != 'timeF' else TimeFeatureEmbedding(
d_model=d_model, embed_type=embed_type, freq=freq)
self.dropout = nn.Dropout(p=dropout)
def forward(self, x, x_mark):
if x_mark is None:
x = self.value_embedding(x)
else:
x = self.value_embedding(x) + self.temporal_embedding(x_mark)
return self.dropout(x)
class PatchEmbedding(nn.Module):
def __init__(self, d_model, patch_len, stride, padding, dropout):
super(PatchEmbedding, self).__init__()
# Patching
self.patch_len = patch_len
self.stride = stride
self.padding_patch_layer = nn.ReplicationPad1d((0, padding))
# Backbone, Input encoding: projection of feature vectors onto a d-dim vector space
self.value_embedding = nn.Linear(patch_len, d_model, bias=False)
# Positional embedding
self.position_embedding = PositionalEmbedding(d_model)
# Residual dropout
self.dropout = nn.Dropout(dropout)
def forward(self, x):
# do patching
n_vars = x.shape[1]
x = self.padding_patch_layer(x)
x = x.unfold(dimension=-1, size=self.patch_len, step=self.stride)
x = torch.reshape(x, (x.shape[0] * x.shape[1], x.shape[2], x.shape[3]))
# Input encoding
x = self.value_embedding(x) + self.position_embedding(x)
return self.dropout(x), n_vars
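# Hedged smoke test (editor's addition, not part of the repo): PatchEmbedding replication-pads
# the series end, unfolds it into strided patches, and projects each patch to d_model; the
# variable dimension is folded into the batch, giving (batch * n_vars, num_patches, d_model).
if __name__ == '__main__':
    bs, n_vars, seq_len = 2, 3, 96
    embed = PatchEmbedding(d_model=64, patch_len=16, stride=8, padding=8, dropout=0.1)
    out, v = embed(torch.randn(bs, n_vars, seq_len))
    # num_patches = (seq_len + padding - patch_len) // stride + 1 = (96 + 8 - 16) // 8 + 1 = 12
    print(out.shape, v)  # expected: torch.Size([6, 12, 64]) 3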
================================================
FILE: ts_classification_methods/timesnet/models/SelfAttention_Family.py
================================================
import torch
import torch.nn as nn
import numpy as np
from math import sqrt
from einops import rearrange, repeat
class TriangularCausalMask():
def __init__(self, B, L, device="cpu"):
mask_shape = [B, 1, L, L]
with torch.no_grad():
self._mask = torch.triu(torch.ones(mask_shape, dtype=torch.bool), diagonal=1).to(device)
@property
def mask(self):
return self._mask
class ProbMask():
def __init__(self, B, H, L, index, scores, device="cpu"):
_mask = torch.ones(L, scores.shape[-1], dtype=torch.bool).to(device).triu(1)
_mask_ex = _mask[None, None, :].expand(B, H, L, scores.shape[-1])
indicator = _mask_ex[torch.arange(B)[:, None, None],
torch.arange(H)[None, :, None],
index, :].to(device)
self._mask = indicator.view(scores.shape).to(device)
@property
def mask(self):
return self._mask
class DSAttention(nn.Module):
'''De-stationary Attention'''
def __init__(self, mask_flag=True, factor=5, scale=None, attention_dropout=0.1, output_attention=False):
super(DSAttention, self).__init__()
self.scale = scale
self.mask_flag = mask_flag
self.output_attention = output_attention
self.dropout = nn.Dropout(attention_dropout)
def forward(self, queries, keys, values, attn_mask, tau=None, delta=None):
B, L, H, E = queries.shape
_, S, _, D = values.shape
scale = self.scale or 1. / sqrt(E)
tau = 1.0 if tau is None else tau.unsqueeze(
1).unsqueeze(1) # B x 1 x 1 x 1
delta = 0.0 if delta is None else delta.unsqueeze(
1).unsqueeze(1) # B x 1 x 1 x S
# De-stationary Attention, rescaling pre-softmax score with learned de-stationary factors
scores = torch.einsum("blhe,bshe->bhls", queries, keys) * tau + delta
if self.mask_flag:
if attn_mask is None:
attn_mask = TriangularCausalMask(B, L, device=queries.device)
scores.masked_fill_(attn_mask.mask, -np.inf)
A = self.dropout(torch.softmax(scale * scores, dim=-1))
V = torch.einsum("bhls,bshd->blhd", A, values)
if self.output_attention:
return V.contiguous(), A
else:
return V.contiguous(), None
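# Hedged note (editor's addition): DSAttention rescales the pre-softmax logits as
# scores * tau + delta, where tau (one scalar per sample) and delta (one value per key
# position) are learned de-stationary factors from the Non-stationary Transformer; with
# tau = 1 and delta = 0 it reduces exactly to the FullAttention defined below.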
class FullAttention(nn.Module):
def __init__(self, mask_flag=True, factor=5, scale=None, attention_dropout=0.1, output_attention=False):
super(FullAttention, self).__init__()
self.scale = scale
self.mask_flag = mask_flag
self.output_attention = output_attention
self.dropout = nn.Dropout(attention_dropout)
def forward(self, queries, keys, values, attn_mask, tau=None, delta=None):
B, L, H, E = queries.shape
_, S, _, D = values.shape
scale = self.scale or 1. / sqrt(E)
scores = torch.einsum("blhe,bshe->bhls", queries, keys)
if self.mask_flag:
if attn_mask is None:
attn_mask = TriangularCausalMask(B, L, device=queries.device)
scores.masked_fill_(attn_mask.mask, -np.inf)
A = self.dropout(torch.softmax(scale * scores, dim=-1))
V = torch.einsum("bhls,bshd->blhd", A, values)
if self.output_attention:
return V.contiguous(), A
else:
return V.contiguous(), None
class ProbAttention(nn.Module):
def __init__(self, mask_flag=True, factor=5, scale=None, attention_dropout=0.1, output_attention=False):
super(ProbAttention, self).__init__()
self.factor = factor
self.scale = scale
self.mask_flag = mask_flag
self.output_attention = output_attention
self.dropout = nn.Dropout(attention_dropout)
def _prob_QK(self, Q, K, sample_k, n_top): # n_top: c*ln(L_q)
# Q [B, H, L, D]
B, H, L_K, E = K.shape
_, _, L_Q, _ = Q.shape
# calculate the sampled Q_K
K_expand = K.unsqueeze(-3).expand(B, H, L_Q, L_K, E)
# real U = U_part(factor*ln(L_k))*L_q
index_sample = torch.randint(L_K, (L_Q, sample_k))
K_sample = K_expand[:, :, torch.arange(
L_Q).unsqueeze(1), index_sample, :]
Q_K_sample = torch.matmul(
Q.unsqueeze(-2), K_sample.transpose(-2, -1)).squeeze()
        # find the Top_k query with sparsity measurement
M = Q_K_sample.max(-1)[0] - torch.div(Q_K_sample.sum(-1), L_K)
M_top = M.topk(n_top, sorted=False)[1]
# use the reduced Q to calculate Q_K
Q_reduce = Q[torch.arange(B)[:, None, None],
torch.arange(H)[None, :, None],
M_top, :] # factor*ln(L_q)
Q_K = torch.matmul(Q_reduce, K.transpose(-2, -1)) # factor*ln(L_q)*L_k
return Q_K, M_top
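    # Hedged note (editor's addition): M = max(Q_K_sample) - mean(Q_K_sample) scores how
    # "peaked" each query's sampled attention row is; only the n_top ~ factor*ln(L_q) most
    # peaked queries get their exact attention row computed, which is what gives
    # Informer-style ProbSparse attention its O(L log L) cost.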
def _get_initial_context(self, V, L_Q):
B, H, L_V, D = V.shape
if not self.mask_flag:
# V_sum = V.sum(dim=-2)
V_sum = V.mean(dim=-2)
contex = V_sum.unsqueeze(-2).expand(B, H,
L_Q, V_sum.shape[-1]).clone()
else: # use mask
# requires that L_Q == L_V, i.e. for self-attention only
assert (L_Q == L_V)
contex = V.cumsum(dim=-2)
return contex
def _update_context(self, context_in, V, scores, index, L_Q, attn_mask):
B, H, L_V, D = V.shape
if self.mask_flag:
attn_mask = ProbMask(B, H, L_Q, index, scores, device=V.device)
scores.masked_fill_(attn_mask.mask, -np.inf)
attn = torch.softmax(scores, dim=-1) # nn.Softmax(dim=-1)(scores)
context_in[torch.arange(B)[:, None, None],
torch.arange(H)[None, :, None],
index, :] = torch.matmul(attn, V).type_as(context_in)
if self.output_attention:
attns = (torch.ones([B, H, L_V, L_V]) /
L_V).type_as(attn).to(attn.device)
attns[torch.arange(B)[:, None, None], torch.arange(H)[
None, :, None], index, :] = attn
return context_in, attns
else:
return context_in, None
def forward(self, queries, keys, values, attn_mask, tau=None, delta=None):
B, L_Q, H, D = queries.shape
_, L_K, _, _ = keys.shape
queries = queries.transpose(2, 1)
keys = keys.transpose(2, 1)
values = values.transpose(2, 1)
U_part = self.factor * \
np.ceil(np.log(L_K)).astype('int').item() # c*ln(L_k)
u = self.factor * \
np.ceil(np.log(L_Q)).astype('int').item() # c*ln(L_q)
U_part = U_part if U_part < L_K else L_K
u = u if u < L_Q else L_Q
scores_top, index = self._prob_QK(
queries, keys, sample_k=U_part, n_top=u)
# add scale factor
scale = self.scale or 1. / sqrt(D)
if scale is not None:
scores_top = scores_top * scale
# get the context
context = self._get_initial_context(values, L_Q)
# update the context with selected top_k queries
context, attn = self._update_context(
context, values, scores_top, index, L_Q, attn_mask)
return context.contiguous(), attn
class AttentionLayer(nn.Module):
def __init__(self, attention, d_model, n_heads, d_keys=None,
d_values=None):
super(AttentionLayer, self).__init__()
d_keys = d_keys or (d_model // n_heads)
d_values = d_values or (d_model // n_heads)
self.inner_attention = attention
self.query_projection = nn.Linear(d_model, d_keys * n_heads)
self.key_projection = nn.Linear(d_model, d_keys * n_heads)
self.value_projection = nn.Linear(d_model, d_values * n_heads)
self.out_projection = nn.Linear(d_values * n_heads, d_model)
self.n_heads = n_heads
def forward(self, queries, keys, values, attn_mask, tau=None, delta=None):
B, L, _ = queries.shape
_, S, _ = keys.shape
H = self.n_heads
queries = self.query_projection(queries).view(B, L, H, -1)
keys = self.key_projection(keys).view(B, S, H, -1)
values = self.value_projection(values).view(B, S, H, -1)
out, attn = self.inner_attention(
queries,
keys,
values,
attn_mask,
tau=tau,
delta=delta
)
out = out.view(B, L, -1)
return self.out_projection(out), attn
# class ReformerLayer(nn.Module):
# def __init__(self, attention, d_model, n_heads, d_keys=None,
# d_values=None, causal=False, bucket_size=4, n_hashes=4):
# super().__init__()
# self.bucket_size = bucket_size
# self.attn = LSHSelfAttention(
# dim=d_model,
# heads=n_heads,
# bucket_size=bucket_size,
# n_hashes=n_hashes,
# causal=causal
# )
#
# def fit_length(self, queries):
# # inside reformer: assert N % (bucket_size * 2) == 0
# B, N, C = queries.shape
# if N % (self.bucket_size * 2) == 0:
# return queries
# else:
# # fill the time series
# fill_len = (self.bucket_size * 2) - (N % (self.bucket_size * 2))
# return torch.cat([queries, torch.zeros([B, fill_len, C]).to(queries.device)], dim=1)
#
# def forward(self, queries, keys, values, attn_mask, tau, delta):
# # in Reformer: default queries=keys
# B, N, C = queries.shape
# queries = self.attn(self.fit_length(queries))[:, :N, :]
# return queries, None
class TwoStageAttentionLayer(nn.Module):
'''
The Two Stage Attention (TSA) Layer
input/output shape: [batch_size, Data_dim(D), Seg_num(L), d_model]
'''
def __init__(self, configs,
seg_num, factor, d_model, n_heads, d_ff=None, dropout=0.1):
super(TwoStageAttentionLayer, self).__init__()
d_ff = d_ff or 4 * d_model
self.time_attention = AttentionLayer(FullAttention(False, configs.factor, attention_dropout=configs.dropout,
output_attention=configs.output_attention), d_model, n_heads)
self.dim_sender = AttentionLayer(FullAttention(False, configs.factor, attention_dropout=configs.dropout,
output_attention=configs.output_attention), d_model, n_heads)
self.dim_receiver = AttentionLayer(FullAttention(False, configs.factor, attention_dropout=configs.dropout,
output_attention=configs.output_attention), d_model, n_heads)
self.router = nn.Parameter(torch.randn(seg_num, factor, d_model))
self.dropout = nn.Dropout(dropout)
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
self.norm3 = nn.LayerNorm(d_model)
self.norm4 = nn.LayerNorm(d_model)
self.MLP1 = nn.Sequential(nn.Linear(d_model, d_ff),
nn.GELU(),
nn.Linear(d_ff, d_model))
self.MLP2 = nn.Sequential(nn.Linear(d_model, d_ff),
nn.GELU(),
nn.Linear(d_ff, d_model))
def forward(self, x, attn_mask=None, tau=None, delta=None):
# Cross Time Stage: Directly apply MSA to each dimension
batch = x.shape[0]
time_in = rearrange(x, 'b ts_d seg_num d_model -> (b ts_d) seg_num d_model')
time_enc, attn = self.time_attention(
time_in, time_in, time_in, attn_mask=None, tau=None, delta=None
)
dim_in = time_in + self.dropout(time_enc)
dim_in = self.norm1(dim_in)
dim_in = dim_in + self.dropout(self.MLP1(dim_in))
dim_in = self.norm2(dim_in)
# Cross Dimension Stage: use a small set of learnable vectors to aggregate and distribute messages to build the D-to-D connection
dim_send = rearrange(dim_in, '(b ts_d) seg_num d_model -> (b seg_num) ts_d d_model', b=batch)
batch_router = repeat(self.router, 'seg_num factor d_model -> (repeat seg_num) factor d_model', repeat=batch)
dim_buffer, attn = self.dim_sender(batch_router, dim_send, dim_send, attn_mask=None, tau=None, delta=None)
dim_receive, attn = self.dim_receiver(dim_send, dim_buffer, dim_buffer, attn_mask=None, tau=None, delta=None)
dim_enc = dim_send + self.dropout(dim_receive)
dim_enc = self.norm3(dim_enc)
dim_enc = dim_enc + self.dropout(self.MLP2(dim_enc))
dim_enc = self.norm4(dim_enc)
final_out = rearrange(dim_enc, '(b seg_num) ts_d d_model -> b ts_d seg_num d_model', b=batch)
return final_out
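# Hedged smoke test (editor's addition, not part of the repo): AttentionLayer projects
# (B, L, d_model) inputs into n_heads subspaces, runs the wrapped attention, and projects
# back to d_model; with output_attention=False the attention map comes back as None.
if __name__ == '__main__':
    layer = AttentionLayer(FullAttention(mask_flag=False, attention_dropout=0.0),
                           d_model=64, n_heads=8)
    q = torch.randn(2, 10, 64)
    out, attn = layer(q, q, q, attn_mask=None)
    print(out.shape, attn)  # expected: torch.Size([2, 10, 64]) None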
================================================
FILE: ts_classification_methods/timesnet/models/TimesNet.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.fft
from timesnet.models.Embed import DataEmbedding
from timesnet.models.Conv_Blocks import Inception_Block_V1
def FFT_for_Period(x, k=2):
# [B, T, C]
xf = torch.fft.rfft(x, dim=1)
# find period by amplitudes
frequency_list = abs(xf).mean(0).mean(-1)
frequency_list[0] = 0
_, top_list = torch.topk(frequency_list, k)
top_list = top_list.detach().cpu().numpy()
period = x.shape[1] // top_list
# print("period.shape = ", period.shape, top_list.shape, top_list, period)
return period, abs(xf).mean(-1)[:, top_list]
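# Hedged worked example (editor's addition): for x of shape [B, 96, C] whose two largest
# mean rFFT amplitudes sit at frequency bins 4 and 8, FFT_for_Period(x, k=2) returns
# period = 96 // [4, 8] = [24, 12] (candidate cycle lengths) plus each sample's amplitude
# at those bins, which TimesBlock later softmaxes into aggregation weights.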
class TimesBlock(nn.Module):
def __init__(self, configs):
super(TimesBlock, self).__init__()
self.seq_len = configs.seq_len
self.pred_len = configs.pred_len
self.k = configs.top_k
# parameter-efficient design
self.conv = nn.Sequential(
Inception_Block_V1(configs.d_model, configs.d_ff,
num_kernels=configs.num_kernels),
nn.GELU(),
Inception_Block_V1(configs.d_ff, configs.d_model,
num_kernels=configs.num_kernels)
)
def forward(self, x):
# print("Input shape:", x.shape)
B, T, N = x.size()
period_list, period_weight = FFT_for_Period(x, self.k)
# print("period_list shape = ", period_list.shape, period_list)
# print("period_list period_weight shape:", period_list.shape, period_weight.shape, self.k, self.seq_len, self.pred_len)
res = []
for i in range(self.k):
period = period_list[i]
# padding
if (self.seq_len + self.pred_len) % period != 0:
length = (
((self.seq_len + self.pred_len) // period) + 1) * period
# print("length = ", length, self.seq_len, self.pred_len, period)
padding = torch.zeros([x.shape[0], (length - (self.seq_len + self.pred_len)), x.shape[2]]).to(x.device)
# print("padding x shape = ", padding.shape, x.shape)
out = torch.cat([x, padding], dim=1)
# print("padding out shape = ", out.shape)
else:
length = (self.seq_len + self.pred_len)
out = x
# print("out.shape = ", out.shape, length, period, length // period, N )
# reshape
out = out.reshape(B, length // period, period,
N).permute(0, 3, 1, 2).contiguous()
# 2D conv: from 1d Variation to 2d Variation
out = self.conv(out)
# reshape back
out = out.permute(0, 2, 3, 1).reshape(B, -1, N)
res.append(out[:, :(self.seq_len + self.pred_len), :])
res = torch.stack(res, dim=-1)
# adaptive aggregation
period_weight = F.softmax(period_weight, dim=1)
period_weight = period_weight.unsqueeze(
1).unsqueeze(1).repeat(1, T, N, 1)
res = torch.sum(res * period_weight, -1)
# residual connection
res = res + x
return res
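# Hedged shape walk-through (editor's addition): with T = seq_len + pred_len = 96,
# N = d_model and period = 24, the (B, 96, N) series is reshaped to (B, 96 // 24, 24, N),
# permuted to (B, N, 4, 24) so the Inception blocks convolve over the (cycle, phase) plane,
# then reshaped back to (B, 96, N); the k per-period results are blended with
# softmax(period_weight) and added to the residual input.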
class Model(nn.Module):
"""
Paper link: https://openreview.net/pdf?id=ju_Uqw384Oq
"""
def __init__(self, configs):
super(Model, self).__init__()
self.configs = configs
self.task_name = configs.task_name
self.seq_len = configs.seq_len
self.label_len = configs.label_len
self.pred_len = configs.pred_len
self.model = nn.ModuleList([TimesBlock(configs)
for _ in range(configs.e_layers)])
self.enc_embedding = DataEmbedding(configs.enc_in, configs.d_model, configs.embed, configs.freq,
configs.dropout)
self.layer = configs.e_layers
self.layer_norm = nn.LayerNorm(configs.d_model)
if self.task_name == 'long_term_forecast' or self.task_name == 'short_term_forecast':
self.predict_linear = nn.Linear(
self.seq_len, self.pred_len + self.seq_len)
self.projection = nn.Linear(
configs.d_model, configs.c_out, bias=True)
if self.task_name == 'imputation' or self.task_name == 'anomaly_detection':
self.projection = nn.Linear(
configs.d_model, configs.c_out, bias=True)
if self.task_name == 'classification':
self.act = F.gelu
self.dropout = nn.Dropout(configs.dropout)
self.projection = nn.Linear(
configs.d_model * configs.seq_len, configs.num_classes)
def forecast(self, x_enc, x_mark_enc, x_dec, x_mark_dec):
# Normalization from Non-stationary Transformer
means = x_enc.mean(1, keepdim=True).detach()
x_enc = x_enc - means
stdev = torch.sqrt(
torch.var(x_enc, dim=1, keepdim=True, unbiased=False) + 1e-5)
x_enc /= stdev
# embedding
enc_out = self.enc_embedding(x_enc, x_mark_enc) # [B,T,C]
enc_out = self.predict_linear(enc_out.permute(0, 2, 1)).permute(
0, 2, 1) # align temporal dimension
# TimesNet
for i in range(self.layer):
enc_out = self.layer_norm(self.model[i](enc_out))
        # project back
dec_out = self.projection(enc_out)
# De-Normalization from Non-stationary Transformer
dec_out = dec_out * \
(stdev[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
dec_out = dec_out + \
(means[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
return dec_out
def imputation(self, x_enc, x_mark_enc, x_dec, x_mark_dec, mask):
# Normalization from Non-stationary Transformer
means = torch.sum(x_enc, dim=1) / torch.sum(mask == 1, dim=1)
means = means.unsqueeze(1).detach()
x_enc = x_enc - means
x_enc = x_enc.masked_fill(mask == 0, 0)
stdev = torch.sqrt(torch.sum(x_enc * x_enc, dim=1) /
torch.sum(mask == 1, dim=1) + 1e-5)
stdev = stdev.unsqueeze(1).detach()
x_enc /= stdev
# embedding
enc_out = self.enc_embedding(x_enc, x_mark_enc) # [B,T,C]
# TimesNet
for i in range(self.layer):
enc_out = self.layer_norm(self.model[i](enc_out))
        # project back
dec_out = self.projection(enc_out)
# De-Normalization from Non-stationary Transformer
dec_out = dec_out * \
(stdev[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
dec_out = dec_out + \
(means[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
return dec_out
def anomaly_detection(self, x_enc):
# Normalization from Non-stationary Transformer
means = x_enc.mean(1, keepdim=True).detach()
x_enc = x_enc - means
stdev = torch.sqrt(
torch.var(x_enc, dim=1, keepdim=True, unbiased=False) + 1e-5)
x_enc /= stdev
# embedding
enc_out = self.enc_embedding(x_enc, None) # [B,T,C]
# TimesNet
for i in range(self.layer):
enc_out = self.layer_norm(self.model[i](enc_out))
        # project back
dec_out = self.projection(enc_out)
# De-Normalization from Non-stationary Transformer
dec_out = dec_out * \
(stdev[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
dec_out = dec_out + \
(means[:, 0, :].unsqueeze(1).repeat(
1, self.pred_len + self.seq_len, 1))
return dec_out
def classification(self, x_enc, x_mark_enc):
# embedding
enc_out = self.enc_embedding(x_enc, None) # [B,T,C]
# TimesNet
for i in range(self.layer):
enc_out = self.layer_norm(self.model[i](enc_out))
# Output
        # the transformer encoder embeddings include no final non-linearity
output = self.act(enc_out)
output = self.dropout(output)
# zero-out padding embeddings
output = output * x_mark_enc.unsqueeze(-1)
# (batch_size, seq_length * d_model)
output = output.reshape(output.shape[0], -1)
output = self.projection(output) # (batch_size, num_classes)
return output
def forward(self, x_enc, x_mark_enc, x_dec=None, x_mark_dec=None, mask=None):
if self.task_name == 'long_term_forecast' or self.task_name == 'short_term_forecast':
dec_out = self.forecast(x_enc, x_mark_enc, x_dec, x_mark_dec)
return dec_out[:, -self.pred_len:, :] # [B, L, D]
if self.task_name == 'imputation':
dec_out = self.imputation(
x_enc, x_mark_enc, x_dec, x_mark_dec, mask)
return dec_out # [B, L, D]
if self.task_name == 'anomaly_detection':
dec_out = self.anomaly_detection(x_enc)
return dec_out # [B, L, D]
if self.task_name == 'classification':
dec_out = self.classification(x_enc, x_mark_enc)
return dec_out # [B, N]
return None
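if __name__ == '__main__':
    # Minimal smoke test (an illustrative sketch, not part of the original
    # file). The config fields below are assumed from the standard TimesNet
    # setup; top_k, d_ff and num_kernels are consumed by TimesBlock above.
    from types import SimpleNamespace
    cfg = SimpleNamespace(task_name='classification', seq_len=32, label_len=0,
                          pred_len=0, e_layers=2, enc_in=3, d_model=16,
                          embed='timeF', freq='h', dropout=0.1, c_out=3,
                          num_classes=4, top_k=3, d_ff=32, num_kernels=4)
    model = Model(cfg)
    x = torch.randn(8, cfg.seq_len, cfg.enc_in)  # (batch, time, channels)
    mask = torch.ones(8, cfg.seq_len)            # padding mask, all steps valid
    print(model(x, mask).shape)                  # expected: torch.Size([8, 4])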
================================================
FILE: ts_classification_methods/timesnet/models/Transformer.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.fft
from timesnet.models.Embed import DataEmbedding
from timesnet.models.Transformer_EncDec import Decoder, DecoderLayer, Encoder, EncoderLayer
from timesnet.models.SelfAttention_Family import FullAttention, AttentionLayer
class Model(nn.Module):
"""
Vanilla Transformer
with O(L^2) complexity
Paper link: https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
"""
def __init__(self, configs):
super(Model, self).__init__()
self.task_name = configs.task_name
self.pred_len = configs.pred_len
self.output_attention = configs.output_attention
# Embedding
self.enc_embedding = DataEmbedding(configs.enc_in, configs.d_model, configs.embed, configs.freq,
configs.dropout)
# Encoder
self.encoder = Encoder(
[
EncoderLayer(
AttentionLayer(
FullAttention(False, configs.factor, attention_dropout=configs.dropout,
output_attention=configs.output_attention), configs.d_model, configs.n_heads),
configs.d_model,
configs.d_ff,
dropout=configs.dropout,
activation=configs.activation
) for l in range(configs.e_layers)
],
norm_layer=torch.nn.LayerNorm(configs.d_model)
)
# Decoder
if self.task_name == 'long_term_forecast' or self.task_name == 'short_term_forecast':
self.dec_embedding = DataEmbedding(configs.dec_in, configs.d_model, configs.embed, configs.freq,
configs.dropout)
self.decoder = Decoder(
[
DecoderLayer(
AttentionLayer(
FullAttention(True, configs.factor, attention_dropout=configs.dropout,
output_attention=False),
configs.d_model, configs.n_heads),
AttentionLayer(
FullAttention(False, configs.factor, attention_dropout=configs.dropout,
output_attention=False),
configs.d_model, configs.n_heads),
configs.d_model,
configs.d_ff,
dropout=configs.dropout,
activation=configs.activation,
)
for l in range(configs.d_layers)
],
norm_layer=torch.nn.LayerNorm(configs.d_model),
projection=nn.Linear(configs.d_model, configs.c_out, bias=True)
)
if self.task_name == 'imputation':
self.projection = nn.Linear(configs.d_model, configs.c_out, bias=True)
if self.task_name == 'anomaly_detection':
self.projection = nn.Linear(configs.d_model, configs.c_out, bias=True)
if self.task_name == 'classification':
self.act = F.gelu
self.dropout = nn.Dropout(configs.dropout)
self.projection = nn.Linear(configs.d_model * configs.seq_len, configs.num_classes)
def forecast(self, x_enc, x_mark_enc, x_dec, x_mark_dec):
# Embedding
enc_out = self.enc_embedding(x_enc, x_mark_enc)
enc_out, attns = self.encoder(enc_out, attn_mask=None)
dec_out = self.dec_embedding(x_dec, x_mark_dec)
dec_out = self.decoder(dec_out, enc_out, x_mask=None, cross_mask=None)
return dec_out
def imputation(self, x_enc, x_mark_enc, x_dec, x_mark_dec, mask):
# Embedding
enc_out = self.enc_embedding(x_enc, x_mark_enc)
enc_out, attns = self.encoder(enc_out, attn_mask=None)
dec_out = self.projection(enc_out)
return dec_out
def anomaly_detection(self, x_enc):
# Embedding
enc_out = self.enc_embedding(x_enc, None)
enc_out, attns = self.encoder(enc_out, attn_mask=None)
dec_out = self.projection(enc_out)
return dec_out
    def classification(self, x_enc, x_mark_enc):
        # Embedding
        enc_out = self.enc_embedding(x_enc, None)
        enc_out, attns = self.encoder(enc_out, attn_mask=None)
        # Output
        output = self.act(enc_out)  # the transformer encoder embeddings include no final non-linearity
        output = self.dropout(output)
        output = output * x_mark_enc.unsqueeze(-1)  # zero out padding embeddings
        output = output.reshape(output.shape[0], -1)  # (batch_size, seq_length * d_model)
        output = self.projection(output)  # (batch_size, num_classes)
        return output
def forward(self, x_enc, x_mark_enc, x_dec=None, x_mark_dec=None, mask=None):
if self.task_name == 'long_term_forecast' or self.task_name == 'short_term_forecast':
dec_out = self.forecast(x_enc, x_mark_enc, x_dec, x_mark_dec)
return dec_out[:, -self.pred_len:, :] # [B, L, D]
if self.task_name == 'imputation':
dec_out = self.imputation(x_enc, x_mark_enc, x_dec, x_mark_dec, mask)
return dec_out # [B, L, D]
if self.task_name == 'anomaly_detection':
dec_out = self.anomaly_detection(x_enc)
return dec_out # [B, L, D]
if self.task_name == 'classification':
dec_out = self.classification(x_enc, x_mark_enc)
return dec_out # [B, N]
return None
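if __name__ == '__main__':
    # Illustrative forecast smoke test (a sketch, not part of the original
    # file). The decoder input follows the usual Informer-style convention:
    # the last label_len observed steps concatenated with pred_len zero
    # placeholders; time marks are omitted (None), which DataEmbedding allows.
    from types import SimpleNamespace
    cfg = SimpleNamespace(task_name='long_term_forecast', seq_len=48,
                          pred_len=12, output_attention=False, enc_in=3,
                          dec_in=3, c_out=3, d_model=16, embed='timeF',
                          freq='h', dropout=0.1, factor=1, n_heads=4,
                          e_layers=2, d_layers=1, d_ff=32, activation='gelu')
    model = Model(cfg)
    label_len = 24
    x_enc = torch.randn(4, cfg.seq_len, cfg.enc_in)
    x_dec = torch.cat([x_enc[:, -label_len:, :],
                       torch.zeros(4, cfg.pred_len, cfg.dec_in)], dim=1)
    print(model(x_enc, None, x_dec, None).shape)  # expected: (4, 12, 3)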
================================================
FILE: ts_classification_methods/timesnet/models/Transformer_EncDec.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
class ConvLayer(nn.Module):
def __init__(self, c_in):
super(ConvLayer, self).__init__()
self.downConv = nn.Conv1d(in_channels=c_in,
out_channels=c_in,
kernel_size=3,
padding=2,
padding_mode='circular')
self.norm = nn.BatchNorm1d(c_in)
self.activation = nn.ELU()
self.maxPool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)
def forward(self, x):
x = self.downConv(x.permute(0, 2, 1))
x = self.norm(x)
x = self.activation(x)
x = self.maxPool(x)
x = x.transpose(1, 2)
return x
class EncoderLayer(nn.Module):
def __init__(self, attention, d_model, d_ff=None, dropout=0.1, activation="relu"):
super(EncoderLayer, self).__init__()
d_ff = d_ff or 4 * d_model
self.attention = attention
self.conv1 = nn.Conv1d(in_channels=d_model, out_channels=d_ff, kernel_size=1)
self.conv2 = nn.Conv1d(in_channels=d_ff, out_channels=d_model, kernel_size=1)
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
self.dropout = nn.Dropout(dropout)
self.activation = F.relu if activation == "relu" else F.gelu
def forward(self, x, attn_mask=None, tau=None, delta=None):
new_x, attn = self.attention(
x, x, x,
attn_mask=attn_mask,
tau=tau, delta=delta
)
x = x + self.dropout(new_x)
y = x = self.norm1(x)
y = self.dropout(self.activation(self.conv1(y.transpose(-1, 1))))
y = self.dropout(self.conv2(y).transpose(-1, 1))
return self.norm2(x + y), attn
class Encoder(nn.Module):
def __init__(self, attn_layers, conv_layers=None, norm_layer=None):
super(Encoder, self).__init__()
self.attn_layers = nn.ModuleList(attn_layers)
self.conv_layers = nn.ModuleList(conv_layers) if conv_layers is not None else None
self.norm = norm_layer
def forward(self, x, attn_mask=None, tau=None, delta=None):
# x [B, L, D]
attns = []
if self.conv_layers is not None:
for i, (attn_layer, conv_layer) in enumerate(zip(self.attn_layers, self.conv_layers)):
delta = delta if i == 0 else None
x, attn = attn_layer(x, attn_mask=attn_mask, tau=tau, delta=delta)
x = conv_layer(x)
attns.append(attn)
x, attn = self.attn_layers[-1](x, tau=tau, delta=None)
attns.append(attn)
else:
for attn_layer in self.attn_layers:
x, attn = attn_layer(x, attn_mask=attn_mask, tau=tau, delta=delta)
attns.append(attn)
if self.norm is not None:
x = self.norm(x)
return x, attns
class DecoderLayer(nn.Module):
def __init__(self, self_attention, cross_attention, d_model, d_ff=None,
dropout=0.1, activation="relu"):
super(DecoderLayer, self).__init__()
d_ff = d_ff or 4 * d_model
self.self_attention = self_attention
self.cross_attention = cross_attention
self.conv1 = nn.Conv1d(in_channels=d_model, out_channels=d_ff, kernel_size=1)
self.conv2 = nn.Conv1d(in_channels=d_ff, out_channels=d_model, kernel_size=1)
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
self.norm3 = nn.LayerNorm(d_model)
self.dropout = nn.Dropout(dropout)
self.activation = F.relu if activation == "relu" else F.gelu
def forward(self, x, cross, x_mask=None, cross_mask=None, tau=None, delta=None):
x = x + self.dropout(self.self_attention(
x, x, x,
attn_mask=x_mask,
tau=tau, delta=None
)[0])
x = self.norm1(x)
x = x + self.dropout(self.cross_attention(
x, cross, cross,
attn_mask=cross_mask,
tau=tau, delta=delta
)[0])
y = x = self.norm2(x)
y = self.dropout(self.activation(self.conv1(y.transpose(-1, 1))))
y = self.dropout(self.conv2(y).transpose(-1, 1))
return self.norm3(x + y)
class Decoder(nn.Module):
def __init__(self, layers, norm_layer=None, projection=None):
super(Decoder, self).__init__()
self.layers = nn.ModuleList(layers)
self.norm = norm_layer
self.projection = projection
def forward(self, x, cross, x_mask=None, cross_mask=None, tau=None, delta=None):
for layer in self.layers:
x = layer(x, cross, x_mask=x_mask, cross_mask=cross_mask, tau=tau, delta=delta)
if self.norm is not None:
x = self.norm(x)
if self.projection is not None:
x = self.projection(x)
return x
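if __name__ == '__main__':
    # Minimal shape check (an illustrative sketch, not part of the original
    # file). EncoderLayer only requires that its attention module return an
    # (output, attention) pair, so a hypothetical identity stub stands in for
    # AttentionLayer(FullAttention(...)) here.
    class _IdentityAttention(nn.Module):
        def forward(self, queries, keys, values, attn_mask=None, tau=None, delta=None):
            return queries, None
    encoder = Encoder([EncoderLayer(_IdentityAttention(), d_model=16, d_ff=32)],
                      norm_layer=nn.LayerNorm(16))
    x = torch.randn(2, 10, 16)
    y, attns = encoder(x)
    print(y.shape)  # expected: torch.Size([2, 10, 16])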
================================================
FILE: ts_classification_methods/timesnet/models/__init__.py
================================================
================================================
FILE: ts_classification_methods/timesnet/scripts/generator_timesnet.py
================================================
uea_all = ['ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions', 'CharacterTrajectories',
'Cricket', 'DuckDuckGeese', 'EigenWorms', 'Epilepsy', 'EthanolConcentration', 'ERing',
'FaceDetection', 'FingerMovements', 'HandMovementDirection', 'Handwriting',
'Heartbeat', 'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PenDigits', 'PEMS-SF', 'PhonemeSpectra', 'RacketSports', 'SelfRegulationSCP1',
'SelfRegulationSCP2', 'SpokenArabicDigits', 'StandWalkJump', 'UWaveGestureLibrary']
ucr_dataset = ['ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY', 'AllGestureWiimoteZ', 'ArrowHead', 'BME',
'Beef',
'BeetleFly', 'BirdChicken', 'CBF', 'Car', 'Chinatown', 'ChlorineConcentration', 'CinCECGTorso', 'Coffee',
'Computers',
'CricketX', 'CricketY', 'CricketZ', 'Crop', 'DiatomSizeReduction', 'DistalPhalanxOutlineAgeGroup',
'DistalPhalanxOutlineCorrect', 'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame', 'DodgerLoopWeekend',
'ECG200', 'ECG5000', 'ECGFiveDays', 'EOGHorizontalSignal', 'EOGVerticalSignal', 'Earthquakes',
'ElectricDevices',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords', 'Fish', 'FordA', 'FordB',
'FreezerRegularTrain',
'FreezerSmallTrain', 'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3', 'GesturePebbleZ1',
'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan', 'GunPointMaleVersusFemale', 'GunPointOldVersusYoung',
'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate', 'InsectEPGRegularTrain',
'InsectEPGSmallTrain',
'InsectWingbeatSound', 'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2', 'Lightning7',
'Mallat', 'Meat',
'MedicalImages', 'MelbournePedestrian', 'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain', 'MoteStrain',
'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OSULeaf', 'OliveOil', 'PLAID', 'PhalangesOutlinesCorrect', 'Phoneme',
'PickupGestureWiimoteZ', 'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'Plane', 'PowerCons',
'ProximalPhalanxOutlineAgeGroup', 'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices',
'Rock', 'ScreenType', 'SemgHandGenderCh2', 'SemgHandMovementCh2', 'SemgHandSubjectCh2',
'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace', 'SonyAIBORobotSurface1',
'SonyAIBORobotSurface2', 'StarLightCurves', 'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG', 'TwoPatterns', 'UMD',
'UWaveGestureLibraryAll',
'UWaveGestureLibraryX', 'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine', 'WordSynonyms',
'Worms',
'WormsTwoClass', 'Yoga']
code_main = 'main_timesnet_ucr' ## main_timesnet_ucr main_timesnet
i = 1
for dataset in ucr_dataset:
print("i = ", i, "dataset_name = ", dataset)
i = i + 1
save_csv_name = code_main + '_0702_' ## --len_k
with open('/SSD/lz/time_series_label_noise/timesnet/scripts/timesnet_uea.sh', 'a') as f:
f.write('python '+ code_main + '.py ' +
'--dataset ' + dataset
+ ' --epoch 1000 ' +
'--save_csv_name ' + save_csv_name + ' --cuda cuda:1' + ';\n')
## nohup ./scripts/timesnet_uea.sh &
## nohup ./scripts/uea_transform.sh &
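## Example of a line this loop appends to timesnet_uea.sh (derived from the
## string concatenation above):
## python main_timesnet_ucr.py --dataset ACSF1 --epoch 1000 --save_csv_name main_timesnet_ucr_0702_ --cuda cuda:1;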
================================================
FILE: ts_classification_methods/tloss_cls/default_hyperparameters.json
================================================
{
"batch_size": 10,
"channels": 40,
"compared_length": null,
"depth": 10,
"nb_steps": 600,
"in_channels": 1,
"kernel_size": 3,
"penalty": null,
"early_stopping": null,
"lr": 0.001,
"nb_random_samples": 10,
"negative_penalty": 1,
"out_channels": 320,
"reduced_size": 160
}
================================================
FILE: ts_classification_methods/tloss_cls/losses/__init__.py
================================================
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import pkgutil
__all__ = []
for loader, module_name, is_pkg in pkgutil.walk_packages(__path__):
__all__.append(module_name)
module = loader.find_module(module_name).load_module(module_name)
exec('%s = module' % module_name)
================================================
FILE: ts_classification_methods/tloss_cls/losses/triplet_loss.py
================================================
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import torch
import numpy
class TripletLoss(torch.nn.modules.loss._Loss):
"""
Triplet loss for representations of time series. Optimized for training
sets where all time series have the same length.
Takes as input a tensor as the chosen batch to compute the loss,
a PyTorch module as the encoder, a 3D tensor (`B`, `C`, `L`) containing
the training set, where `B` is the batch size, `C` is the number of
channels and `L` is the length of the time series, as well as a boolean
which, if True, enables to save GPU memory by propagating gradients after
each loss term, instead of doing it after computing the whole loss.
    The triplets are chosen in the following manner. First the sizes of the
    positive and negative samples are randomly chosen in the range of lengths
    of time series in the dataset. The size of the anchor time series is
    randomly chosen with the same length upper bound but the length of the
    positive samples as lower bound. An anchor of this length is then chosen
randomly in the given time series of the train set, and positive samples
are randomly chosen among subseries of the anchor. Finally, negative
samples of the chosen length are randomly chosen in random time series of
the train set.
@param compared_length Maximum length of randomly chosen time series. If
None, this parameter is ignored.
@param nb_random_samples Number of negative samples per batch example.
@param negative_penalty Multiplicative coefficient for the negative sample
loss.
"""
def __init__(self, compared_length, nb_random_samples, negative_penalty):
super(TripletLoss, self).__init__()
self.compared_length = compared_length
if self.compared_length is None:
self.compared_length = numpy.inf
self.nb_random_samples = nb_random_samples
self.negative_penalty = negative_penalty
def forward(self, batch, encoder, train, save_memory=False):
batch_size = batch.size(0)
train_size = train.size(0)
length = min(self.compared_length, train.size(2))
# For each batch element, we pick nb_random_samples possible random
# time series in the training set (choice of batches from where the
# negative examples will be sampled)
samples = numpy.random.choice(
train_size, size=(self.nb_random_samples, batch_size)
)
samples = torch.LongTensor(samples)
# Choice of length of positive and negative samples
length_pos_neg = numpy.random.randint(1, high=length + 1)
# We choose for each batch example a random interval in the time
# series, which is the 'anchor'
random_length = numpy.random.randint(
length_pos_neg, high=length + 1
) # Length of anchors
beginning_batches = numpy.random.randint(
0, high=length - random_length + 1, size=batch_size
) # Start of anchors
# The positive samples are chosen at random in the chosen anchors
beginning_samples_pos = numpy.random.randint(
0, high=random_length - length_pos_neg + 1, size=batch_size
) # Start of positive samples in the anchors
# Start of positive samples in the batch examples
beginning_positive = beginning_batches + beginning_samples_pos
# End of positive samples in the batch examples
end_positive = beginning_positive + length_pos_neg
# We randomly choose nb_random_samples potential negative samples for
# each batch example
beginning_samples_neg = numpy.random.randint(
0, high=length - length_pos_neg + 1,
size=(self.nb_random_samples, batch_size)
)
representation = encoder(torch.cat(
[batch[
j: j + 1, :,
beginning_batches[j]: beginning_batches[j] + random_length
] for j in range(batch_size)]
)) # Anchors representations
positive_representation = encoder(torch.cat(
[batch[
j: j + 1, :, end_positive[j] - length_pos_neg: end_positive[j]
] for j in range(batch_size)]
)) # Positive samples representations
size_representation = representation.size(1)
# Positive loss: -logsigmoid of dot product between anchor and positive
# representations
loss = -torch.mean(torch.nn.functional.logsigmoid(torch.bmm(
representation.view(batch_size, 1, size_representation),
positive_representation.view(batch_size, size_representation, 1)
)))
# If required, backward through the first computed term of the loss and
# free from the graph everything related to the positive sample
if save_memory:
loss.backward(retain_graph=True)
loss = 0
del positive_representation
torch.cuda.empty_cache()
multiplicative_ratio = self.negative_penalty / self.nb_random_samples
for i in range(self.nb_random_samples):
# Negative loss: -logsigmoid of minus the dot product between
# anchor and negative representations
negative_representation = encoder(
torch.cat([train[samples[i, j]: samples[i, j] + 1][
:, :,
beginning_samples_neg[i, j]:
beginning_samples_neg[i, j] + length_pos_neg
] for j in range(batch_size)])
)
loss += multiplicative_ratio * -torch.mean(
torch.nn.functional.logsigmoid(-torch.bmm(
representation.view(batch_size, 1, size_representation),
negative_representation.view(
batch_size, size_representation, 1
)
))
)
# If required, backward through the first computed term of the loss
# and free from the graph everything related to the negative sample
# Leaves the last backward pass to the training procedure
if save_memory and i != self.nb_random_samples - 1:
loss.backward(retain_graph=True)
loss = 0
del negative_representation
torch.cuda.empty_cache()
return loss
class TripletLossVaryingLength(torch.nn.modules.loss._Loss):
"""
Triplet loss for representations of time series where the training set
features time series with unequal lengths.
Takes as input a tensor as the chosen batch to compute the loss,
a PyTorch module as the encoder, a 3D tensor (`B`, `C`, `L`) containing the
training set, where `B` is the batch size, `C` is the number of channels
and `L` is the maximum length of the time series (NaN values representing
the end of a shorter time series), as well as a boolean which, if True,
enables to save GPU memory by propagating gradients after each loss term,
instead of doing it after computing the whole loss.
The triplets are chosen in the following manner. First the sizes of
positive and negative samples are randomly chosen in the range of lengths
of time series in the dataset. The size of the anchor time series is
    randomly chosen with the same length upper bound but the length of the
positive samples as lower bound. An anchor of this length is then chosen
randomly in the given time series of the train set, and positive samples
are randomly chosen among subseries of the anchor. Finally, negative
samples of the chosen length are randomly chosen in random time series of
the train set.
@param compared_length Maximum length of randomly chosen time series. If
None, this parameter is ignored.
@param nb_random_samples Number of negative samples per batch example.
@param negative_penalty Multiplicative coefficient for the negative sample
loss.
"""
def __init__(self, compared_length, nb_random_samples, negative_penalty):
super(TripletLossVaryingLength, self).__init__()
self.compared_length = compared_length
if self.compared_length is None:
self.compared_length = numpy.inf
self.nb_random_samples = nb_random_samples
self.negative_penalty = negative_penalty
def forward(self, batch, encoder, train, save_memory=False):
batch_size = batch.size(0)
train_size = train.size(0)
max_length = train.size(2)
# For each batch element, we pick nb_random_samples possible random
# time series in the training set (choice of batches from where the
# negative examples will be sampled)
samples = numpy.random.choice(
train_size, size=(self.nb_random_samples, batch_size)
)
samples = torch.LongTensor(samples)
# Computation of the lengths of the relevant time series
with torch.no_grad():
lengths_batch = max_length - torch.sum(
torch.isnan(batch[:, 0]), 1
).data.cpu().numpy()
lengths_samples = numpy.empty(
(self.nb_random_samples, batch_size), dtype=int
)
for i in range(self.nb_random_samples):
lengths_samples[i] = max_length - torch.sum(
torch.isnan(train[samples[i], 0]), 1
).data.cpu().numpy()
# Choice of lengths of positive and negative samples
lengths_pos = numpy.empty(batch_size, dtype=int)
lengths_neg = numpy.empty(
(self.nb_random_samples, batch_size), dtype=int
)
for j in range(batch_size):
lengths_pos[j] = numpy.random.randint(
1, high=min(self.compared_length, lengths_batch[j]) + 1
)
for i in range(self.nb_random_samples):
lengths_neg[i, j] = numpy.random.randint(
1,
high=min(self.compared_length, lengths_samples[i, j]) + 1
)
# We choose for each batch example a random interval in the time
# series, which is the 'anchor'
random_length = numpy.array([numpy.random.randint(
lengths_pos[j],
high=min(self.compared_length, lengths_batch[j]) + 1
) for j in range(batch_size)]) # Length of anchors
beginning_batches = numpy.array([numpy.random.randint(
0, high=lengths_batch[j] - random_length[j] + 1
) for j in range(batch_size)]) # Start of anchors
# The positive samples are chosen at random in the chosen anchors
# Start of positive samples in the anchors
beginning_samples_pos = numpy.array([numpy.random.randint(
0, high=random_length[j] - lengths_pos[j] + 1
) for j in range(batch_size)])
# Start of positive samples in the batch examples
beginning_positive = beginning_batches + beginning_samples_pos
# End of positive samples in the batch examples
end_positive = beginning_positive + lengths_pos
# We randomly choose nb_random_samples potential negative samples for
# each batch example
beginning_samples_neg = numpy.array([[numpy.random.randint(
0, high=lengths_samples[i, j] - lengths_neg[i, j] + 1
) for j in range(batch_size)] for i in range(self.nb_random_samples)])
representation = torch.cat([encoder(
batch[
j: j + 1, :,
beginning_batches[j]: beginning_batches[j] + random_length[j]
]
) for j in range(batch_size)]) # Anchors representations
positive_representation = torch.cat([encoder(
batch[
j: j + 1, :,
end_positive[j] - lengths_pos[j]: end_positive[j]
]
) for j in range(batch_size)]) # Positive samples representations
size_representation = representation.size(1)
# Positive loss: -logsigmoid of dot product between anchor and positive
# representations
loss = -torch.mean(torch.nn.functional.logsigmoid(torch.bmm(
representation.view(batch_size, 1, size_representation),
positive_representation.view(batch_size, size_representation, 1)
)))
# If required, backward through the first computed term of the loss and
# free from the graph everything related to the positive sample
if save_memory:
loss.backward(retain_graph=True)
loss = 0
del positive_representation
torch.cuda.empty_cache()
multiplicative_ratio = self.negative_penalty / self.nb_random_samples
for i in range(self.nb_random_samples):
# Negative loss: -logsigmoid of minus the dot product between
# anchor and negative representations
negative_representation = torch.cat([encoder(
train[samples[i, j]: samples[i, j] + 1][
:, :,
beginning_samples_neg[i, j]:
beginning_samples_neg[i, j] + lengths_neg[i, j]
]
) for j in range(batch_size)])
loss += multiplicative_ratio * -torch.mean(
torch.nn.functional.logsigmoid(-torch.bmm(
representation.view(batch_size, 1, size_representation),
negative_representation.view(
batch_size, size_representation, 1
)
))
)
# If required, backward through the first computed term of the loss
# and free from the graph everything related to the negative sample
# Leaves the last backward pass to the training procedure
if save_memory and i != self.nb_random_samples - 1:
loss.backward(retain_graph=True)
loss = 0
del negative_representation
torch.cuda.empty_cache()
return loss
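if __name__ == '__main__':
    # Usage sketch (illustrative, not part of the original file): a toy
    # encoder mapping (B, C, L) series to (B, D) representations, driven
    # through TripletLoss exactly as the docstring above describes.
    class _ToyEncoder(torch.nn.Module):
        def __init__(self, in_channels=1, out_dim=8):
            super(_ToyEncoder, self).__init__()
            self.conv = torch.nn.Conv1d(in_channels, out_dim,
                                        kernel_size=3, padding=1)
        def forward(self, x):
            return self.conv(x).max(dim=2)[0]  # global max pooling over time
    train = torch.randn(20, 1, 64)  # 20 univariate series of length 64
    loss_fn = TripletLoss(compared_length=None, nb_random_samples=5,
                          negative_penalty=1)
    loss = loss_fn(train[:4], _ToyEncoder(), train)
    loss.backward()
    print(float(loss))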
================================================
FILE: ts_classification_methods/tloss_cls/networks/__init__.py
================================================
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import pkgutil
__all__ = []
for loader, module_name, is_pkg in pkgutil.walk_packages(__path__):
__all__.append(module_name)
module = loader.find_module(module_name).load_module(module_name)
exec('%s = module' % module_name)
================================================
FILE: ts_classification_methods/tloss_cls/networks/causal_cnn.py
================================================
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# Implementation of causal CNNs partly taken and modified from
# https://github.com/locuslab/TCN/blob/master/TCN/tcn.py, originally created
# with the following license.
# MIT License
# Copyright (c) 2018 CMU Locus Lab
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import torch
class Chomp1d(torch.nn.Module):
"""
Removes the last elements of a time series.
Takes as input a three-dimensional tensor (`B`, `C`, `L`) where `B` is the
batch size, `C` is the number of input channels, and `L` is the length of
the input. Outputs a three-dimensional tensor (`B`, `C`, `L - s`) where `s`
is the number of elements to remove.
@param chomp_size Number of elements to remove.
"""
def __init__(self, chomp_size):
super(Chomp1d, self).__init__()
self.chomp_size = chomp_size
def forward(self, x):
return x[:, :, :-self.chomp_size]
class SqueezeChannels(torch.nn.Module):
"""
Squeezes, in a three-dimensional tensor, the third dimension.
"""
def __init__(self):
super(SqueezeChannels, self).__init__()
def forward(self, x):
return x.squeeze(2)
class CausalConvolutionBlock(torch.nn.Module):
"""
Causal convolution block, composed sequentially of two causal convolutions
(with leaky ReLU activation functions), and a parallel residual connection.
Takes as input a three-dimensional tensor (`B`, `C`, `L`) where `B` is the
batch size, `C` is the number of input channels, and `L` is the length of
the input. Outputs a three-dimensional tensor (`B`, `C`, `L`).
@param in_channels Number of input channels.
@param out_channels Number of output channels.
@param kernel_size Kernel size of the applied non-residual convolutions.
@param dilation Dilation parameter of non-residual convolutions.
@param final Disables, if True, the last activation function.
"""
def __init__(self, in_channels, out_channels, kernel_size, dilation,
final=False):
super(CausalConvolutionBlock, self).__init__()
# Computes left padding so that the applied convolutions are causal
padding = (kernel_size - 1) * dilation
# First causal convolution
conv1 = torch.nn.utils.weight_norm(torch.nn.Conv1d(
in_channels, out_channels, kernel_size,
padding=padding, dilation=dilation
))
# The truncation makes the convolution causal
chomp1 = Chomp1d(padding)
relu1 = torch.nn.LeakyReLU()
# Second causal convolution
conv2 = torch.nn.utils.weight_norm(torch.nn.Conv1d(
out_channels, out_channels, kernel_size,
padding=padding, dilation=dilation
))
chomp2 = Chomp1d(padding)
relu2 = torch.nn.LeakyReLU()
# Causal network
self.causal = torch.nn.Sequential(
conv1, chomp1, relu1, conv2, chomp2, relu2
)
# Residual connection
self.upordownsample = torch.nn.Conv1d(
in_channels, out_channels, 1
) if in_channels != out_channels else None
# Final activation function
self.relu = torch.nn.LeakyReLU() if final else None
def forward(self, x):
out_causal = self.causal(x)
res = x if self.upordownsample is None else self.upordownsample(x)
if self.relu is None:
return out_causal + res
else:
return self.relu(out_causal + res)
class CausalCNN(torch.nn.Module):
"""
Causal CNN, composed of a sequence of causal convolution blocks.
Takes as input a three-dimensional tensor (`B`, `C`, `L`) where `B` is the
batch size, `C` is the number of input channels, and `L` is the length of
the input. Outputs a three-dimensional tensor (`B`, `C_out`, `L`).
@param in_channels Number of input channels.
@param channels Number of channels processed in the network and of output
channels.
@param depth Depth of the network.
@param out_channels Number of output channels.
@param kernel_size Kernel size of the applied non-residual convolutions.
"""
def __init__(self, in_channels, channels, depth, out_channels,
kernel_size):
super(CausalCNN, self).__init__()
layers = [] # List of causal convolution blocks
dilation_size = 1 # Initial dilation size
for i in range(depth):
in_channels_block = in_channels if i == 0 else channels
layers += [CausalConvolutionBlock(
in_channels_block, channels, kernel_size, dilation_size
)]
dilation_size *= 2 # Doubles the dilation size at each step
# Last layer
layers += [CausalConvolutionBlock(
channels, out_channels, kernel_size, dilation_size
)]
self.network = torch.nn.Sequential(*layers)
def forward(self, x):
return self.network(x)
class CausalCNNEncoder(torch.nn.Module):
"""
Encoder of a time series using a causal CNN: the computed representation is
the output of a fully connected layer applied to the output of an adaptive
max pooling layer applied on top of the causal CNN, which reduces the
length of the time series to a fixed size.
Takes as input a three-dimensional tensor (`B`, `C`, `L`) where `B` is the
batch size, `C` is the number of input channels, and `L` is the length of
    the input. Outputs a two-dimensional tensor (`B`, `C`).
@param in_channels Number of input channels.
@param channels Number of channels manipulated in the causal CNN.
@param depth Depth of the causal CNN.
@param reduced_size Fixed length to which the output time series of the
causal CNN is reduced.
@param out_channels Number of output channels.
@param kernel_size Kernel size of the applied non-residual convolutions.
"""
def __init__(self, in_channels, channels, depth, reduced_size,
out_channels, kernel_size):
super(CausalCNNEncoder, self).__init__()
causal_cnn = CausalCNN(
in_channels, channels, depth, reduced_size, kernel_size
)
reduce_size = torch.nn.AdaptiveMaxPool1d(1)
squeeze = SqueezeChannels() # Squeezes the third dimension (time)
linear = torch.nn.Linear(reduced_size, out_channels)
self.network = torch.nn.Sequential(
causal_cnn, reduce_size, squeeze, linear
)
def forward(self, x):
return self.network(x)
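if __name__ == '__main__':
    # Shape check (an illustrative sketch, not part of the original file):
    # the adaptive max pooling reduces any input length to a single step, so
    # the encoder maps a (B, C, L) batch to fixed-size (B, out_channels)
    # representations regardless of L.
    encoder = CausalCNNEncoder(in_channels=1, channels=8, depth=3,
                               reduced_size=16, out_channels=4, kernel_size=3)
    x = torch.randn(2, 1, 50)
    print(encoder(x).shape)  # expected: torch.Size([2, 4])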
================================================
FILE: ts_classification_methods/tloss_cls/networks/lstm.py
================================================
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import torch
class LSTMEncoder(torch.nn.Module):
"""
    Encoder of a time series using an LSTM, computing a linear transformation
    of the output of an LSTM.
    Takes as input a three-dimensional tensor (`B`, `C`, `L`) where `B` is the
    batch size, `C` is the number of input channels, and `L` is the length of
    the input. Outputs a two-dimensional tensor (`B`, `C`).
Only works for one-dimensional time series.
"""
def __init__(self):
super(LSTMEncoder, self).__init__()
self.lstm = torch.nn.LSTM(
input_size=1, hidden_size=256, num_layers=2
)
self.linear = torch.nn.Linear(256, 160)
def forward(self, x):
return self.linear(self.lstm(x.permute(2, 0, 1))[0][-1])
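if __name__ == '__main__':
    # Shape check (illustrative, not part of the original file): a univariate
    # (B, 1, L) batch is encoded into (B, 160) representations.
    encoder = LSTMEncoder()
    print(encoder(torch.randn(2, 1, 30)).shape)  # expected: torch.Size([2, 160])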
================================================
FILE: ts_classification_methods/tloss_cls/scikit_wrappers.py
================================================
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import math
import numpy
import torch
import sklearn
import sklearn.svm
import joblib  # replaces sklearn.externals.joblib, removed from scikit-learn
import sklearn.model_selection
import utils
import losses
import networks
class TimeSeriesEncoderClassifier(sklearn.base.BaseEstimator,
sklearn.base.ClassifierMixin):
"""
"Virtual" class to wrap an encoder of time series as a PyTorch module and
a SVM classifier with RBF kernel on top of its computed representations in
a scikit-learn class.
All inheriting classes should implement the get_params and set_params
methods, as in the recommendations of scikit-learn.
@param compared_length Maximum length of randomly chosen time series. If
None, this parameter is ignored.
@param nb_random_samples Number of randomly chosen intervals to select the
final negative sample in the loss.
@param negative_penalty Multiplicative coefficient for the negative sample
loss.
@param batch_size Batch size used during the training of the encoder.
@param nb_steps Number of optimization steps to perform for the training of
the encoder.
@param lr learning rate of the Adam optimizer used to train the encoder.
@param penalty Penalty term for the SVM classifier. If None and if the
number of samples is high enough, performs a hyperparameter search
to find a suitable constant.
@param early_stopping Enables, if not None, early stopping heuristic
for the training of the representations, based on the final
score. Representations are still learned unsupervisedly in this
case. If the number of samples per class is no more than 10,
disables this heuristic. If not None, accepts an integer
representing the patience of the early stopping strategy.
@param encoder Encoder PyTorch module.
@param params Dictionaries of the parameters of the encoder.
@param in_channels Number of input channels of the time series.
@param cuda Transfers, if True, all computations to the GPU.
@param gpu GPU index to use, if CUDA is enabled.
"""
def __init__(self, compared_length, nb_random_samples, negative_penalty,
batch_size, nb_steps, lr, penalty, early_stopping,
encoder, params, in_channels, out_channels, cuda=False,
gpu=0):
self.architecture = ''
self.cuda = cuda
self.gpu = gpu
self.batch_size = batch_size
self.nb_steps = nb_steps
self.lr = lr
self.penalty = penalty
self.early_stopping = early_stopping
self.encoder = encoder
self.params = params
self.in_channels = in_channels
self.out_channels = out_channels
self.loss = losses.triplet_loss.TripletLoss(
compared_length, nb_random_samples, negative_penalty
)
self.loss_varying = losses.triplet_loss.TripletLossVaryingLength(
compared_length, nb_random_samples, negative_penalty
)
self.classifier = sklearn.svm.SVC()
self.optimizer = torch.optim.Adam(self.encoder.parameters(), lr=lr)
def save_encoder(self, prefix_file):
"""
Saves the encoder and the SVM classifier.
@param prefix_file Path and prefix of the file where the models should
be saved (at '$(prefix_file)_$(architecture)_encoder.pth').
"""
torch.save(
self.encoder.state_dict(),
prefix_file + '_' + self.architecture + '_encoder.pth'
)
    def save(self, prefix_file):
        """
        Saves the encoder and the SVM classifier.
        @param prefix_file Path and prefix of the file where the models should
               be saved (at '$(prefix_file)_$(architecture)_classifier.pkl' and
               '$(prefix_file)_$(architecture)_encoder.pth').
        """
        self.save_encoder(prefix_file)
        # The classifier dump was disabled when sklearn.externals.joblib was
        # removed from scikit-learn; the standalone joblib package restores it
        # (load() below reads the same file).
        joblib.dump(
            self.classifier,
            prefix_file + '_' + self.architecture + '_classifier.pkl'
        )
def load_encoder(self, prefix_file):
"""
Loads an encoder.
@param prefix_file Path and prefix of the file where the model should
be loaded (at '$(prefix_file)_$(architecture)_encoder.pth').
"""
if self.cuda:
self.encoder.load_state_dict(torch.load(
prefix_file + '_' + self.architecture + '_encoder.pth',
map_location=lambda storage, loc: storage.cuda(self.gpu)
))
else:
self.encoder.load_state_dict(torch.load(
prefix_file + '_' + self.architecture + '_encoder.pth',
map_location=lambda storage, loc: storage
))
def load(self, prefix_file):
"""
Loads an encoder and an SVM classifier.
@param prefix_file Path and prefix of the file where the models should
be loaded (at '$(prefix_file)_$(architecture)_classifier.pkl'
and '$(prefix_file)_$(architecture)_encoder.pth').
"""
self.load_encoder(prefix_file)
        self.classifier = joblib.load(
prefix_file + '_' + self.architecture + '_classifier.pkl'
)
def fit_classifier(self, features, y):
"""
Trains the classifier using precomputed features. Uses an SVM
classifier with RBF kernel.
@param features Computed features of the training set.
@param y Training labels.
"""
nb_classes = numpy.shape(numpy.unique(y, return_counts=True)[1])[0]
train_size = numpy.shape(features)[0]
# To use a 1-NN classifier, no need for model selection, simply
# replace the code by the following:
# import sklearn.neighbors
# self.classifier = sklearn.neighbors.KNeighborsClassifier(
# n_neighbors=1
# )
# return self.classifier.fit(features, y)
self.classifier = sklearn.svm.SVC(
C=1 / self.penalty
if self.penalty is not None and self.penalty > 0
else numpy.inf,
gamma='scale'
)
if train_size // nb_classes < 5 or train_size < 50 or self.penalty is not None:
return self.classifier.fit(features, y)
else:
grid_search = sklearn.model_selection.GridSearchCV(
self.classifier, {
'C': [
0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000,
numpy.inf
],
'kernel': ['rbf'],
'degree': [3],
'gamma': ['scale'],
'coef0': [0],
'shrinking': [True],
'probability': [False],
'tol': [0.001],
'cache_size': [200],
'class_weight': [None],
'verbose': [False],
'max_iter': [10000000],
'decision_function_shape': ['ovr'],
'random_state': [None]
},
cv=5, n_jobs=5
)
if train_size <= 10000:
grid_search.fit(features, y)
else:
# If the training set is too large, subsample 10000 train
# examples
split = sklearn.model_selection.train_test_split(
features, y,
train_size=10000, random_state=0, stratify=y
)
grid_search.fit(split[0], split[2])
self.classifier = grid_search.best_estimator_
return self.classifier
def fit_encoder(self, X, y=None, save_memory=False, verbose=True):
"""
Trains the encoder unsupervisedly using the given training data.
@param X Training set.
@param y Training labels, used only for early stopping, if enabled. If
None, disables early stopping in the method.
@param save_memory If True, enables to save GPU memory by propagating
gradients after each loss term of the encoder loss, instead of
doing it after computing the whole loss.
@param verbose Enables, if True, to monitor which epoch is running in
the encoder training.
"""
# Check if the given time series have unequal lengths
varying = bool(numpy.isnan(numpy.sum(X)))
train = torch.from_numpy(X)
if self.cuda:
train = train.cuda(self.gpu)
if y is not None:
nb_classes = numpy.shape(numpy.unique(y, return_counts=True)[1])[0]
train_size = numpy.shape(X)[0]
ratio = train_size // nb_classes
        X = torch.from_numpy(X)
        if self.cuda:
            X = X.cuda(self.gpu)
train_torch_dataset = utils.Dataset(X)
train_generator = torch.utils.data.DataLoader(
train_torch_dataset, batch_size=self.batch_size, shuffle=True
)
max_score = 0
i = 0 # Number of performed optimization steps
epochs = 0 # Number of performed epochs
count = 0 # Count of number of epochs without improvement
# Will be true if, by enabling epoch_selection, a model was selected
# using cross-validation
found_best = False
# Encoder training
while i < self.nb_steps:
if verbose:
print('Epoch: ', epochs + 1)
for batch in train_generator:
self.optimizer.zero_grad()
if not varying:
loss = self.loss(
batch, self.encoder, train, save_memory=save_memory
)
else:
loss = self.loss_varying(
batch, self.encoder, train, save_memory=save_memory
)
loss.backward()
self.optimizer.step()
i += 1
if i >= self.nb_steps:
break
epochs += 1
# Early stopping strategy
if self.early_stopping is not None and y is not None and (
ratio >= 5 and train_size >= 50
):
# Computes the best regularization parameters
features = self.encode(X)
self.classifier = self.fit_classifier(features, y)
# Cross validation score
score = numpy.mean(sklearn.model_selection.cross_val_score(
self.classifier, features, y=y, cv=5, n_jobs=5
))
count += 1
# If the model is better than the previous one, update
if score > max_score:
count = 0
found_best = True
max_score = score
best_encoder = type(self.encoder)(**self.params)
best_encoder.double()
if self.cuda:
best_encoder.cuda(self.gpu)
best_encoder.load_state_dict(self.encoder.state_dict())
if count == self.early_stopping:
break
# If a better model was found, use it
if found_best:
self.encoder = best_encoder
return self.encoder
def fit(self, X, y, save_memory=False, verbose=False):
"""
Trains sequentially the encoder unsupervisedly and then the classifier
using the given labels over the learned features.
@param X Training set.
@param y Training labels.
@param save_memory If True, enables to save GPU memory by propagating
gradients after each loss term of the encoder loss, instead of
doing it after computing the whole loss.
@param verbose Enables, if True, to monitor which epoch is running in
the encoder training.
"""
# Fitting encoder
self.encoder = self.fit_encoder(
X, y=y, save_memory=save_memory, verbose=verbose
)
# SVM classifier training
features = self.encode(X)
self.classifier = self.fit_classifier(features, y)
return self
def encode(self, X, batch_size=50):
"""
Outputs the representations associated to the input by the encoder.
@param X Testing set.
@param batch_size Size of batches used for splitting the test data to
avoid out of memory errors when using CUDA. Ignored if the
testing set contains time series of unequal lengths.
"""
# Check if the given time series have unequal lengths
varying = bool(numpy.isnan(numpy.sum(X)))
test = utils.Dataset(X)
test_generator = torch.utils.data.DataLoader(
test, batch_size=batch_size if not varying else 1
)
features = numpy.zeros((numpy.shape(X)[0], self.out_channels))
self.encoder = self.encoder.eval()
count = 0
with torch.no_grad():
if not varying:
for batch in test_generator:
if self.cuda:
batch = batch.cuda(self.gpu)
features[
count * batch_size: (count + 1) * batch_size
] = self.encoder(batch).cpu()
count += 1
else:
for batch in test_generator:
if self.cuda:
batch = batch.cuda(self.gpu)
length = batch.size(2) - torch.sum(
torch.isnan(batch[0, 0])
).data.cpu().numpy()
features[count: count + 1] = self.encoder(
batch[:, :, :length]
).cpu()
count += 1
self.encoder = self.encoder.train()
return features
def encode_window(self, X, window, batch_size=50, window_batch_size=10000):
"""
Outputs the representations associated to the input by the encoder,
for each subseries of the input of the given size (sliding window
representations).
@param X Testing set.
@param window Size of the sliding window.
@param batch_size Size of batches used for splitting the test data to
avoid out of memory errors when using CUDA.
@param window_batch_size Size of batches of windows to compute in a
run of encode, to save RAM.
"""
features = numpy.empty((
numpy.shape(X)[0], self.out_channels,
numpy.shape(X)[2] - window + 1
))
masking = numpy.empty((
min(window_batch_size, numpy.shape(X)[2] - window + 1),
numpy.shape(X)[1], window
))
for b in range(numpy.shape(X)[0]):
for i in range(math.ceil(
(numpy.shape(X)[2] - window + 1) / window_batch_size)
):
for j in range(
i * window_batch_size,
min(
(i + 1) * window_batch_size,
numpy.shape(X)[2] - window + 1
)
):
j0 = j - i * window_batch_size
masking[j0, :, :] = X[b, :, j: j + window]
features[
b, :, i * window_batch_size: (i + 1) * window_batch_size
] = numpy.swapaxes(
self.encode(masking[:j0 + 1], batch_size=batch_size), 0, 1
)
return features
def predict(self, X, batch_size=50):
"""
Outputs the class predictions for the given test data.
@param X Testing set.
@param batch_size Size of batches used for splitting the test data to
avoid out of memory errors when using CUDA. Ignored if the
testing set contains time series of unequal lengths.
"""
features = self.encode(X, batch_size=batch_size)
return self.classifier.predict(features)
def score(self, X, y, batch_size=50):
"""
Outputs accuracy of the SVM classifier on the given testing data.
@param X Testing set.
@param y Testing labels.
@param batch_size Size of batches used for splitting the test data to
avoid out of memory errors when using CUDA. Ignored if the
testing set contains time series of unequal lengths.
"""
features = self.encode(X, batch_size=batch_size)
return self.classifier.score(features, y)
class CausalCNNEncoderClassifier(TimeSeriesEncoderClassifier):
"""
Wraps a causal CNN encoder of time series as a PyTorch module and a
SVM classifier on top of its computed representations in a scikit-learn
class.
@param compared_length Maximum length of randomly chosen time series. If
None, this parameter is ignored.
@param nb_random_samples Number of randomly chosen intervals to select the
final negative sample in the loss.
@param negative_penalty Multiplicative coefficient for the negative sample
loss.
@param batch_size Batch size used during the training of the encoder.
@param nb_steps Number of optimization steps to perform for the training of
the encoder.
@param lr learning rate of the Adam optimizer used to train the encoder.
@param penalty Penalty term for the SVM classifier. If None and if the
number of samples is high enough, performs a hyperparameter search
to find a suitable constant.
@param early_stopping Enables, if not None, early stopping heuristic
for the training of the representations, based on the final
score. Representations are still learned unsupervisedly in this
case. If the number of samples per class is no more than 10,
disables this heuristic. If not None, accepts an integer
representing the patience of the early stopping strategy.
@param channels Number of channels manipulated in the causal CNN.
@param depth Depth of the causal CNN.
@param reduced_size Fixed length to which the output time series of the
causal CNN is reduced.
@param out_channels Number of features in the final output.
@param kernel_size Kernel size of the applied non-residual convolutions.
@param in_channels Number of input channels of the time series.
@param cuda Transfers, if True, all computations to the GPU.
@param gpu GPU index to use, if CUDA is enabled.
"""
# nb_steps=2000
def __init__(self, compared_length=50, nb_random_samples=10,
negative_penalty=1, batch_size=1, nb_steps=2000, lr=0.001,
penalty=1, early_stopping=None, channels=10, depth=1,
reduced_size=10, out_channels=10, kernel_size=4,
in_channels=1, cuda=False, gpu=0):
super(CausalCNNEncoderClassifier, self).__init__(
compared_length, nb_random_samples, negative_penalty, batch_size,
nb_steps, lr, penalty, early_stopping,
self.__create_encoder(in_channels, channels, depth, reduced_size,
out_channels, kernel_size, cuda, gpu),
self.__encoder_params(in_channels, channels, depth, reduced_size,
out_channels, kernel_size),
in_channels, out_channels, cuda, gpu
)
self.architecture = 'CausalCNN'
self.channels = channels
self.depth = depth
self.reduced_size = reduced_size
self.kernel_size = kernel_size
def __create_encoder(self, in_channels, channels, depth, reduced_size,
out_channels, kernel_size, cuda, gpu):
encoder = networks.causal_cnn.CausalCNNEncoder(
in_channels, channels, depth, reduced_size, out_channels,
kernel_size
)
encoder.double()
if cuda:
encoder.cuda(gpu)
return encoder
def __encoder_params(self, in_channels, channels, depth, reduced_size,
out_channels, kernel_size):
return {
'in_channels': in_channels,
'channels': channels,
'depth': depth,
'reduced_size': reduced_size,
'out_channels': out_channels,
'kernel_size': kernel_size
}
def encode_sequence(self, X, batch_size=50):
"""
Outputs the representations associated to the input by the encoder,
from the start of the time series to each time step (i.e., the
evolution of the representations of the input time series with
        respect to time steps).
        Takes advantage of the causal CNN (before the max pooling), which
ensures that its output at time step i only depends on time step i and
previous time steps.
@param X Testing set.
@param batch_size Size of batches used for splitting the test data to
avoid out of memory errors when using CUDA. Ignored if the
testing set contains time series of unequal lengths.
"""
# Check if the given time series have unequal lengths
varying = bool(numpy.isnan(numpy.sum(X)))
test = utils.Dataset(X)
test_generator = torch.utils.data.DataLoader(
test, batch_size=batch_size if not varying else 1
)
length = numpy.shape(X)[2]
features = numpy.full(
(numpy.shape(X)[0], self.out_channels, length), numpy.nan
)
self.encoder = self.encoder.eval()
causal_cnn = self.encoder.network[0]
linear = self.encoder.network[3]
count = 0
with torch.no_grad():
if not varying:
for batch in test_generator:
if self.cuda:
batch = batch.cuda(self.gpu)
# First applies the causal CNN
output_causal_cnn = causal_cnn(batch)
after_pool = torch.empty(
output_causal_cnn.size(), dtype=torch.double
)
if self.cuda:
after_pool = after_pool.cuda(self.gpu)
after_pool[:, :, 0] = output_causal_cnn[:, :, 0]
# Then for each time step, computes the output of the max
# pooling layer
for i in range(1, length):
after_pool[:, :, i] = torch.max(
torch.cat([
after_pool[:, :, i - 1: i],
output_causal_cnn[:, :, i: i+1]
], dim=2),
dim=2
)[0]
features[
count * batch_size: (count + 1) * batch_size, :, :
] = torch.transpose(linear(
torch.transpose(after_pool, 1, 2)
), 1, 2)
count += 1
else:
for batch in test_generator:
if self.cuda:
batch = batch.cuda(self.gpu)
length = batch.size(2) - torch.sum(
torch.isnan(batch[0, 0])
).data.cpu().numpy()
output_causal_cnn = causal_cnn(batch)
after_pool = torch.empty(
output_causal_cnn.size(), dtype=torch.double
)
if self.cuda:
after_pool = after_pool.cuda(self.gpu)
after_pool[:, :, 0] = output_causal_cnn[:, :, 0]
for i in range(1, length):
after_pool[:, :, i] = torch.max(
torch.cat([
after_pool[:, :, i - 1: i],
output_causal_cnn[:, :, i: i+1]
], dim=2),
dim=2
)[0]
features[
count: count + 1, :, :
] = torch.transpose(linear(
torch.transpose(after_pool, 1, 2)
), 1, 2)
count += 1
self.encoder = self.encoder.train()
return features
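    # Usage sketch (illustrative, not part of the original file): for an input
    # array X of shape (n_samples, in_channels, length), encode_sequence
    # returns an array of shape (n_samples, out_channels, length) in which
    # entry [i, :, t] depends only on X[i, :, :t + 1], thanks to the causal
    # convolutions. For example:
    #
    #   clf = CausalCNNEncoderClassifier(cuda=False)
    #   # ... after clf.fit(...) or clf.load(...):
    #   reps = clf.encode_sequence(X, batch_size=50)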
def get_params(self, deep=True):
return {
'compared_length': self.loss.compared_length,
'nb_random_samples': self.loss.nb_random_samples,
'negative_penalty': self.loss.negative_penalty,
'batch_size': self.batch_size,
'nb_steps': self.nb_steps,
'lr': self.lr,
'penalty': self.penalty,
'early_stopping': self.early_stopping,
'channels': self.channels,
'depth': self.depth,
'reduced_size': self.reduced_size,
'kernel_size': self.kernel_size,
'in_channels': self.in_channels,
'out_channels': self.out_channels,
'cuda': self.cuda,
'gpu': self.gpu
}
def set_params(self, compared_length, nb_random_samples, negative_penalty,
batch_size, nb_steps, lr, penalty, early_stopping,
channels, depth, reduced_size, out_channels, kernel_size,
in_channels, cuda, gpu):
self.__init__(
compared_length, nb_random_samples, negative_penalty, batch_size,
nb_steps, lr, penalty, early_stopping, channels, depth,
reduced_size, out_channels, kernel_size, in_channels, cuda, gpu
)
return self
class LSTMEncoderClassifier(TimeSeriesEncoderClassifier):
"""
    Wraps an LSTM encoder of time series as a PyTorch module and an SVM
classifier on top of its computed representations in a scikit-learn
class.
@param compared_length Maximum length of randomly chosen time series. If
None, this parameter is ignored.
@param nb_random_samples Number of randomly chosen intervals to select the
final negative sample in the loss.
@param negative_penalty Multiplicative coefficient for the negative sample
loss.
@param batch_size Batch size used during the training of the encoder.
@param nb_steps Number of optimization steps to perform for the training of
the encoder.
    @param lr Learning rate of the Adam optimizer used to train the encoder.
@param penalty Penalty term for the SVM classifier. If None and if the
number of samples is high enough, performs a hyperparameter search
to find a suitable constant.
@param early_stopping Enables, if not None, early stopping heuristic
for the training of the representations, based on the final
score. Representations are still learned unsupervisedly in this
case. If the number of samples per class is no more than 10,
disables this heuristic. If not None, accepts an integer
representing the patience of the early stopping strategy.
@param cuda Transfers, if True, all computations to the GPU.
@param in_channels Number of input channels of the time series.
@param gpu GPU index to use, if CUDA is enabled.
"""
def __init__(self, compared_length=50, nb_random_samples=10,
negative_penalty=1, batch_size=1, nb_steps=2000, lr=0.001,
penalty=1, early_stopping=None, in_channels=1, cuda=False,
gpu=0):
super(LSTMEncoderClassifier, self).__init__(
compared_length, nb_random_samples, negative_penalty, batch_size,
nb_steps, lr, penalty, early_stopping,
self.__create_encoder(cuda, gpu), {}, in_channels, 160, cuda, gpu
)
assert in_channels == 1
self.architecture = 'LSTM'
def __create_encoder(self, cuda, gpu):
encoder = networks.lstm.LSTMEncoder()
encoder.double()
if cuda:
encoder.cuda(gpu)
return encoder
def get_params(self, deep=True):
return {
'compared_length': self.loss.compared_length,
'nb_random_samples': self.loss.nb_random_samples,
'negative_penalty': self.loss.negative_penalty,
'batch_size': self.batch_size,
'nb_steps': self.nb_steps,
'lr': self.lr,
'penalty': self.penalty,
'early_stopping': self.early_stopping,
'in_channels': self.in_channels,
'cuda': self.cuda,
'gpu': self.gpu
}
def set_params(self, compared_length, nb_random_samples, negative_penalty,
batch_size, nb_steps, lr, penalty, early_stopping,
in_channels, cuda, gpu):
self.__init__(
compared_length, nb_random_samples, negative_penalty, batch_size,
nb_steps, lr, penalty, early_stopping, in_channels, cuda, gpu
)
return self
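# Usage sketch (illustrative, not part of the original file): the LSTM wrapper
# only accepts univariate series (the constructor asserts in_channels == 1)
# and yields 160-dimensional representations via the inherited encode method:
#
#   clf = LSTMEncoderClassifier(cuda=False)
#   # X must have shape (n_samples, 1, length);
#   # after fitting, clf.encode(X) has shape (n_samples, 160).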
================================================
FILE: ts_classification_methods/tloss_cls/scripts/ucr.sh
================================================
python ucr.py --dataset AllGestureWiimoteY --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset UWaveGestureLibraryX --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset DiatomSizeReduction --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset FreezerSmallTrain --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ScreenType --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset MixedShapesSmallTrain --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset SonyAIBORobotSurface2 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset LargeKitchenAppliances --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ProximalPhalanxOutlineCorrect --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset OSULeaf --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset OliveOil --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset FreezerRegularTrain --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Herring --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset GesturePebbleZ1 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset MelbournePedestrian --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset PhalangesOutlinesCorrect --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset CricketZ --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ACSF1 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset FaceFour --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset SemgHandGenderCh2 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Haptics --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset UWaveGestureLibraryY --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Coffee --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset TwoLeadECG --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset DistalPhalanxOutlineAgeGroup --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset MixedShapesRegularTrain --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset SmoothSubspace --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Meat --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ShapesAll --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset InsectEPGSmallTrain --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset CinCECGTorso --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset BeetleFly --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Ham --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ProximalPhalanxTW --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ItalyPowerDemand --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset GunPointMaleVersusFemale --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset SonyAIBORobotSurface1 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset MedicalImages --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset SmallKitchenAppliances --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset PigCVP --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Crop --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Chinatown --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset PLAID --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset RefrigerationDevices --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Wine --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Yoga --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset AllGestureWiimoteX --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset DistalPhalanxTW --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Computers --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ElectricDevices --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Adiac --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset InlineSkate --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset FacesUCR --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ShapeletSim --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset GunPointAgeSpan --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Phoneme --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset CricketX --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Lightning2 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Beef --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset PowerCons --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Plane --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset NonInvasiveFetalECGThorax2 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset UMD --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Wafer --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ToeSegmentation1 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Car --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset UWaveGestureLibraryZ --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset EOGVerticalSignal --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset CBF --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset EOGHorizontalSignal --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Strawberry --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset StarLightCurves --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset DodgerLoopGame --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset FordA --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Fish --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset PigArtPressure --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ShakeGestureWiimoteZ --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ECGFiveDays --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset GunPointOldVersusYoung --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset GesturePebbleZ2 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ECG200 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Symbols --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset FordB --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset FaceAll --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset MiddlePhalanxTW --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset MiddlePhalanxOutlineCorrect --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset GestureMidAirD1 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset InsectEPGRegularTrain --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset DodgerLoopDay --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ProximalPhalanxOutlineAgeGroup --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset HandOutlines --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset SwedishLeaf --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset AllGestureWiimoteZ --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset InsectWingbeatSound --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset MiddlePhalanxOutlineAgeGroup --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset GestureMidAirD3 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ChlorineConcentration --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ArrowHead --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Fungi --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset PigAirwayPressure --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset PickupGestureWiimoteZ --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Rock --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Worms --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Lightning7 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset BME --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset SyntheticControl --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset MoteStrain --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset SemgHandMovementCh2 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Mallat --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset GestureMidAirD2 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset CricketY --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset NonInvasiveFetalECGThorax1 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ToeSegmentation2 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset ECG5000 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Trace --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset WormsTwoClass --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset GunPoint --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset UWaveGestureLibraryAll --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset EthanolLevel --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset WordSynonyms --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset HouseTwenty --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset DodgerLoopWeekend --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset Earthquakes --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset TwoPatterns --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset DistalPhalanxOutlineCorrect --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset SemgHandSubjectCh2 --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset FiftyWords --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
python ucr.py --dataset BirdChicken --path /dev_data/zzj/hzy/datasets/UCR --hyper ./default_hyperparameters.json --cuda --random_seed 42
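# The runs above can equivalently be driven by a loop (a sketch, assuming the
# same ucr.py interface and dataset layout; the dataset list is abbreviated):
#
#   for ds in AllGestureWiimoteY UWaveGestureLibraryX DiatomSizeReduction; do
#       python ucr.py --dataset "$ds" --path /dev_data/zzj/hzy/datasets/UCR \
#           --hyper ./default_hyperparameters.json --cuda --random_seed 42
#   done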
================================================
FILE: ts_classification_methods/tloss_cls/scripts/uea.sh
================================================
python uea.py --dataset CharacterTrajectories --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset PenDigits --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset InsectWingbeat --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset ArticularyWordRecognition --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset Heartbeat --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset Handwriting --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset AtrialFibrillation --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset PhonemeSpectra --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset BasicMotions --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset SelfRegulationSCP1 --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset JapaneseVowels --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset UWaveGestureLibrary --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset Libras --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset EthanolConcentration --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset RacketSports --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset NATOPS --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset StandWalkJump --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset ERing --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset HandMovementDirection --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset SelfRegulationSCP2 --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset Epilepsy --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset FaceDetection --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset PEMS-SF --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset Cricket --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset SpokenArabicDigits --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset DuckDuckGeese --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset FingerMovements --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset EigenWorms --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset MotorImagery --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
python uea.py --dataset LSST --path /dev_data/zzj/hzy/datasets/UEA --hyper ./default_hyperparameters.json --cuda --random_seed 42
================================================
FILE: ts_classification_methods/tloss_cls/transfer_ucr.py
================================================
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import os
import json
import torch
import argparse
import pickle
import ucr
import scikit_wrappers
def parse_arguments():
parser = argparse.ArgumentParser(
description='Uses the learned representations for a dataset to ' +
'learn classifiers for all other UCR datasets'
)
parser.add_argument('--path', type=str, metavar='PATH', required=True,
help='path where the UCR datasets are located')
parser.add_argument('--save_path', type=str, metavar='PATH', required=True,
help='path where the encoder is saved')
parser.add_argument('--dataset', type=str, metavar='D', required=True,
help='dataset name')
parser.add_argument('--cuda', action='store_true',
help='activate to use CUDA')
parser.add_argument('--gpu', type=int, default=0, metavar='GPU',
help='index of GPU used for computations (default: 0)')
return parser.parse_args()
if __name__ == '__main__':
args = parse_arguments()
if args.cuda and not torch.cuda.is_available():
print("CUDA is not available, proceeding without it...")
args.cuda = False
classifier = scikit_wrappers.CausalCNNEncoderClassifier()
hf = open(
os.path.join(args.save_path, args.dataset + '_hyperparameters.json'),
'r'
)
hp_dict = json.load(hf)
hf.close()
hp_dict['cuda'] = args.cuda
hp_dict['gpu'] = args.gpu
classifier.set_params(**hp_dict)
classifier.load(os.path.join(args.save_path, args.dataset))
print("Classification tasks...")
results = {}
# List of folders / datasets in the given path
datasets = [x[0][len(args.path) + 1:] for x in os.walk(args.path)][1:]
for dataset in datasets:
train, train_labels, test, test_labels = ucr.load_UCR_dataset(
args.path, dataset
)
classifier.fit_classifier(classifier.encode(train), train_labels)
        # Score once and reuse the value for both logging and the results dict
        accuracy = classifier.score(test, test_labels)
        print(dataset, "Test accuracy: " + str(accuracy))
        results[dataset] = accuracy
with open('/dev_data/zzj/hzy/pretrained_model/Tloss/results/accu/CricketX_result.pkl', 'ab') as f:
pickle.dump(results, f)
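    # Note (a sketch, not part of the original file): the output filename above
    # is hardcoded to CricketX_result.pkl regardless of --dataset, and 'ab'
    # mode appends one pickle record per run. Reading the file back therefore
    # requires repeated loads until EOF:
    #
    #   records = []
    #   with open(result_path, 'rb') as f:  # result_path is illustrative
    #       while True:
    #           try:
    #               records.append(pickle.load(f))
    #           except EOFError:
    #               break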
================================================
FILE: ts_classification_methods/tloss_cls/ucr.py
================================================
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import os
import json
import math
import sys
# sys.path must be extended before importing tsm_utils / data from the parent
# directory, otherwise those imports fail when ucr.py is run from this folder.
sys.path.append('..')
import torch
import numpy
import pandas
import argparse
import pickle
import sklearn
import scikit_wrappers
from tsm_utils import set_seed
from data.preprocessing import *
def load_UCR_dataset(path, dataset):
"""
Loads the UCR dataset given in input in numpy arrays.
@param path Path where the UCR dataset is located.
@param dataset Name of the UCR dataset.
@return Quadruplet containing the training set, the corresponding training
labels, the testing set and the corresponding testing labels.
"""
train_file = os.path.join(path, dataset, dataset + "_TRAIN.tsv")
test_file = os.path.join(path, dataset, dataset + "_TEST.tsv")
train_df = pandas.read_csv(train_file, sep='\t', header=None)
test_df = pandas.read_csv(test_file, sep='\t', header=None)
train_array = numpy.array(train_df)
test_array = numpy.array(test_df)
# Move the labels to {0, ..., L-1}
labels = numpy.unique(train_array[:, 0])
transform = {}
for i, l in enumerate(labels):
transform[l] = i
train = numpy.expand_dims(train_array[:, 1:], 1).astype(numpy.float64)
train_labels = numpy.vectorize(transform.get)(train_array[:, 0])
test = numpy.expand_dims(test_array[:, 1:], 1).astype(numpy.float64)
test_labels = numpy.vectorize(transform.get)(test_array[:, 0])
# Normalization for non-normalized datasets
# To keep the amplitude information, we do not normalize values over
# individual time series, but on the whole dataset
if dataset not in [
'AllGestureWiimoteX',
'AllGestureWiimoteY',
'AllGestureWiimoteZ',
'BME',
'Chinatown',
'Crop',
'EOGHorizontalSignal',
'EOGVerticalSignal',
'Fungi',
'GestureMidAirD1',
'GestureMidAirD2',
'GestureMidAirD3',
'GesturePebbleZ1',
'GesturePebbleZ2',
'GunPointAgeSpan',
'GunPointMaleVersusFemale',
'GunPointOldVersusYoung',
'HouseTwenty',
'InsectEPGRegularTrain',
'InsectEPGSmallTrain',
'MelbournePedestrian',
'PickupGestureWiimoteZ',
'PigAirwayPressure',
'PigArtPressure',
'PigCVP',
'PLAID',
'PowerCons',
'Rock',
'SemgHandGenderCh2',
'SemgHandMovementCh2',
'SemgHandSubjectCh2',
'ShakeGestureWiimoteZ',
'SmoothSubspace',
'UMD'
]:
return train, train_labels, test, test_labels
# Post-publication note:
# Using the testing set to normalize might bias the learned network,
    # but with a limited impact on the reported results for a few datasets.
# See the related discussion here: https://github.com/White-Link/UnsupervisedScalableRepresentationLearningTimeSeries/pull/13.
mean = numpy.nanmean(numpy.concatenate([train, test]))
var = numpy.nanvar(numpy.concatenate([train, test]))
train = (train - mean) / math.sqrt(var)
test = (test - mean) / math.sqrt(var)
return train, train_labels, test, test_labels
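# Shape note (descriptive, not part of the original file): for a UCR dataset
# with N training series of length L, load_UCR_dataset returns train of shape
# (N, 1, L) as float64 and integer labels remapped to {0, ..., n_classes - 1},
# plus the analogous test arrays; datasets absent from the list above are
# additionally z-normalized with statistics pooled over train and test.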
def fit_hyperparameters(file, train, train_labels, cuda, gpu,
save_memory=False):
"""
    Creates a classifier from the given set of hyperparameters in the input
    file, fits it, and returns it.
    @param file Path of a file containing a set of hyperparameters.
@param train Training set.
@param train_labels Labels for the training set.
@param cuda If True, enables computations on the GPU.
@param gpu GPU to use if CUDA is enabled.
@param save_memory If True, save GPU memory by propagating gradients after
each loss term, instead of doing it after computing the whole loss.
"""
classifier = scikit_wrappers.CausalCNNEncoderClassifier()
# Loads a given set of hyperparameters and fits a model with those
hf = open(os.path.join(file), 'r')
params = json.load(hf)
hf.close()
# Check the number of input channels
params['in_channels'] = numpy.shape(train)[1]
params['cuda'] = cuda
params['gpu'] = gpu
classifier.set_params(**params)
return classifier.fit(
train, train_labels, save_memory=save_memory, verbose=True
)
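# A minimal hyperparameter file sketch (field names follow
# CausalCNNEncoderClassifier.set_params above; the values shown are
# illustrative, not the repository's defaults, and in_channels / cuda / gpu
# are overwritten by fit_hyperparameters anyway):
#
#   {
#       "compared_length": 50, "nb_random_samples": 10, "negative_penalty": 1,
#       "batch_size": 10, "nb_steps": 2000, "lr": 0.001, "penalty": null,
#       "early_stopping": null, "channels": 40, "depth": 10,
#       "reduced_size": 160, "out_channels": 320, "kernel_size": 3,
#       "in_channels": 1, "cuda": false, "gpu": 0
#   }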
def parse_arguments():
parser = argparse.ArgumentParser(
description='Classification tests for UCR repository datasets'
)
parser.add_argument('--dataset', type=str, metavar='D', required=True,
help='dataset name')
parser.add_argument('--path', type=str, metavar='PATH', required=True,
help='path where the dataset is located')
parser.add_argument('--save_path', type=str, metavar='PATH',
help='path where the estimator is/should be saved')
parser.add_argument('--cuda', action='store_true',
help='activate to use CUDA')
parser.add_argument('--gpu', type=int, default=0, metavar='GPU',
help='index of GPU used for computations (default: 0)')
parser.add_argument('--hyper', type=str, metavar='FILE', required=True,
                        help='path of the file of hyperparameters to use ' +
                             'for training; must be a JSON file')
parser.add_argument('--load', action='store_true', default=False,
help='activate to load the estimator instead of ' +
'training it')
parser.add_argument('--fit_classifier', action='store_true', default=False,
help='if not supervised, activate to load the ' +
'model and retrain the classifier')
parser.add_argument('--random_seed', type=int, default=42)
return parser.parse_args()
if __name__ == '__main__':
args = parse_arguments()
if args.cuda and not torch.cuda.is_available():
print("CUDA is not available, proceeding without it...")
args.cuda = False
'''
train, train_labels, test, test_labels = load_UCR_dataset(
args.path, args.dataset
)
'''
# set seed
set_seed(args)
sum_dataset, sum_target, num_classes = load_data(args.path, args.dataset)
'''
sum_dataset = normalize_per_series(sum_dataset)
sum_dataset = numpy.expand_dims(sum_dataset, 1).astype(numpy.float64)
'''
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = k_fold(
sum_dataset, sum_target)
accuracies = []
for i, train_dataset in enumerate(train_datasets):
        print('fold {} started training!'.format(i + 1))
train_target = train_targets[i]
val_target = val_targets[i]
val_dataset = val_datasets[i]
test_dataset = test_datasets[i]
test_target = test_targets[i]
        # The validation split is merged into the training set below, so fill
        # its NaNs and normalize it as well (the original discarded the filled
        # validation data and concatenated the raw split).
        train_dataset, val_dataset, test_dataset = fill_nan_value(
            train_dataset, val_dataset, test_dataset)
        train_dataset, val_dataset, test_dataset = normalize_per_series(
            train_dataset), normalize_per_series(val_dataset), normalize_per_series(test_dataset)
        train_dataset = numpy.concatenate((train_dataset, val_dataset))
        train_target = numpy.concatenate((train_target, val_target))
train_dataset, test_dataset = numpy.expand_dims(train_dataset, 1).astype(
numpy.float64), numpy.expand_dims(test_dataset, 1).astype(numpy.float64)
if not args.load and not args.fit_classifier:
classifier = fit_hyperparameters(
args.hyper, train_dataset, train_target, args.cuda, args.gpu
)
else:
classifier = scikit_wrappers.CausalCNNEncoderClassifier()
hf = open(
os.path.join(
args.save_path, args.dataset + '_hyperparameters.json'
), 'r'
)
hp_dict = json.load(hf)
hf.close()
hp_dict['cuda'] = args.cuda
hp_dict['gpu'] = args.gpu
classifier.set_params(**hp_dict)
classifier.load(os.path.join(args.save_path, args.dataset))
if not args.load:
if args.fit_classifier:
classifier.fit_classifier(
classifier.encode(train_dataset), train_target)
'''
classifier.save(
os.path.join(args.save_path, args.dataset)
)
with open(
os.path.join(
args.save_path, args.dataset + '_hyperparameters.json'
), 'w'
) as fp:
json.dump(classifier.get_params(), fp)
'''
print("Test accuracy: " + str(classifier.score(test_dataset, test_target)))
accuracies.append(classifier.score(test_dataset, test_target))
accuracies = numpy.array(accuracies)
    if os.path.exists('./tloss_result.csv'):
        result_form = pandas.read_csv('./tloss_result.csv')
    else:
        result_form = pandas.DataFrame(columns=['target', 'accuracy', 'std'])
    # This file imports `pandas`, not `pandas as pd`, so the module name is
    # used here; the original `pd.*` calls would raise a NameError.
    result_form = result_form.append({'target': args.dataset,
                                      'accuracy': '%.4f' % numpy.mean(accuracies),
                                      'std': '%.4f' % numpy.std(accuracies)},
                                     ignore_index=True)
    result_form = result_form.iloc[:, -3:]
    result_form.to_csv('./tloss_result.csv')
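    # Note on the CSV round-trip above: to_csv() without index=False writes the
    # DataFrame index as an extra leading column, so each reload gains one
    # column; iloc[:, -3:] keeps only the last three columns
    # ('target', 'accuracy', 'std') to compensate.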
================================================
FILE: ts_classification_methods/tloss_cls/uea.py
================================================
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import os
import json
import math
import sys
# sys.path must be extended before importing tsm_utils / data from the parent
# directory; the original appended and then immediately removed '..', which
# broke the local imports at the top of the file.
sys.path.append('..')
import torch
import numpy
import pandas as pd
import argparse
'''
import weka.core.jvm
import weka.core.converters
'''
import time
import scikit_wrappers
from tsm_utils import set_seed
from data import load_UEA, UEADataset, k_fold, fill_nan_value, normalize_per_series
def fit_hyperparameters(file, train, train_labels, cuda, gpu,
save_memory=False):
"""
    Creates a classifier from the given set of hyperparameters in the input
    file, fits it, and returns it.
    @param file Path of a file containing a set of hyperparameters.
@param train Training set.
@param train_labels Labels for the training set.
@param cuda If True, enables computations on the GPU.
@param gpu GPU to use if CUDA is enabled.
@param save_memory If True, save GPU memory by propagating gradients after
each loss term, instead of doing it after computing the whole loss.
"""
classifier = scikit_wrappers.CausalCNNEncoderClassifier()
# Loads a given set of hyperparameters and fits a model with those
hf = open(os.path.join(file), 'r')
params = json.load(hf)
hf.close()
# Check the number of input channels
params['in_channels'] = numpy.shape(train)[1]
params['cuda'] = cuda
params['gpu'] = gpu
classifier.set_params(**params)
return classifier.fit(
train, train_labels, save_memory=save_memory, verbose=True
)
def parse_arguments():
parser = argparse.ArgumentParser(
description='Classification tests for UEA repository datasets'
)
parser.add_argument('--dataset', type=str, metavar='D', required=True,
help='dataset name')
parser.add_argument('--path', type=str, metavar='PATH', required=True,
help='path where the dataset is located')
parser.add_argument('--save_path', type=str, metavar='PATH', required=False,
help='path where the estimator is/should be saved')
parser.add_argument('--cuda', action='store_true',
help='activate to use CUDA')
parser.add_argument('--gpu', type=int, default=0, metavar='GPU',
help='index of GPU used for computations (default: 0)')
parser.add_argument('--hyper', type=str, metavar='FILE', required=True,
help='path of the file of hyperparameters to use ' +
'for training; must be a JSON file')
parser.add_argument('--load', action='store_true', default=False,
help='activate to load the estimator instead of ' +
'training it')
parser.add_argument('--fit_classifier', action='store_true', default=False,
help='if not supervised, activate to load the ' +
'model and retrain the classifier')
parser.add_argument('--random_seed', type=int, default=42)
return parser.parse_args()
if __name__ == '__main__':
args = parse_arguments()
if args.cuda and not torch.cuda.is_available():
print("CUDA is not available, proceeding without it...")
args.cuda = False
set_seed(args)
sum_dataset, sum_target, num_classes = load_UEA(args.path, args.dataset)
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = k_fold(
sum_dataset, sum_target)
accuracies = []
times = []
for i, train_dataset in enumerate(train_datasets):
start = time.time()
        print('fold {} started training!'.format(i + 1))
train_target = train_targets[i]
val_target = val_targets[i]
val_dataset = val_datasets[i]
test_dataset = test_datasets[i]
test_target = test_targets[i]
        # As in ucr.py, the validation split is merged into the training set
        # below, so fill its NaNs and normalize it as well.
        train_dataset, val_dataset, test_dataset = fill_nan_value(
            train_dataset, val_dataset, test_dataset)
        train_dataset, val_dataset, test_dataset = normalize_per_series(
            train_dataset), normalize_per_series(val_dataset), normalize_per_series(test_dataset)
        train_dataset = numpy.concatenate((train_dataset, val_dataset))
        train_target = numpy.concatenate((train_target, val_target))
if not args.load and not args.fit_classifier:
classifier = fit_hyperparameters(
args.hyper, train_dataset, train_target, args.cuda, args.gpu,
save_memory=True
)
else:
classifier = scikit_wrappers.CausalCNNEncoderClassifier()
hf = open(
os.path.join(
args.save_path, args.dataset + '_hyperparameters.json'
), 'r'
)
hp_dict = json.load(hf)
hf.close()
hp_dict['cuda'] = args.cuda
hp_dict['gpu'] = args.gpu
classifier.set_params(**hp_dict)
classifier.load(os.path.join(args.save_path, args.dataset))
if not args.load:
if args.fit_classifier:
classifier.fit_classifier(
classifier.encode(train_dataset), train_target)
accu = classifier.score(test_dataset, test_target)
print("Test accuracy: " + str(accu))
end = time.time()
times.append(end-start)
accuracies.append(accu)
accuracies = numpy.array(accuracies)
times = numpy.array(times)
if os.path.exists('./tloss_uea.csv'):
result_form = pd.read_csv('./tloss_uea.csv')
else:
result_form = pd.DataFrame(
columns=['target', 'accuracy', 'std', 'times'])
result_form = result_form.append({'target': args.dataset, 'accuracy': '%.4f' % numpy.mean(
accuracies), 'std': '%.4f' % numpy.std(accuracies), 'times': '%.4f' % numpy.mean(times)}, ignore_index=True)
result_form = result_form.iloc[:, -4:]
result_form.to_csv('./tloss_uea.csv')
================================================
FILE: ts_classification_methods/tloss_cls/utils.py
================================================
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import numpy
import torch.utils.data
class Dataset(torch.utils.data.Dataset):
"""
PyTorch wrapper for a numpy dataset.
@param dataset Numpy array representing the dataset.
"""
def __init__(self, dataset):
self.dataset = dataset
def __len__(self):
return numpy.shape(self.dataset)[0]
def __getitem__(self, index):
return self.dataset[index]
class LabelledDataset(torch.utils.data.Dataset):
"""
PyTorch wrapper for a numpy dataset and its associated labels.
@param dataset Numpy array representing the dataset.
@param labels One-dimensional array of the same length as dataset with
non-negative int values.
"""
def __init__(self, dataset, labels):
self.dataset = dataset
self.labels = labels
def __len__(self):
return numpy.shape(self.dataset)[0]
def __getitem__(self, index):
return self.dataset[index], self.labels[index]
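# Usage sketch (illustrative, not part of the original file):
#
#   import numpy
#   data = numpy.zeros((8, 1, 100))  # 8 univariate series of length 100
#   loader = torch.utils.data.DataLoader(Dataset(data), batch_size=4)
#   for batch in loader:  # each batch has shape (4, 1, 100)
#       pass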
================================================
FILE: ts_classification_methods/train.py
================================================
import argparse
import os
import time
import numpy as np
import torch
from torch.utils.data import DataLoader
from data.dataloader import UCRDataset, UEADataset
from data.preprocessing import normalize_per_series, fill_nan_value, normalize_train_val_test, load_UEA, normalize_uea_set
from tsm_utils import build_model, set_seed, build_dataset, build_loss, evaluate, get_all_datasets, save_cls_result
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# Base setup
parser.add_argument('--backbone', type=str, default='fcn', help='encoder backbone, fcn or dilated')
parser.add_argument('--task', type=str, default='classification', help='classification or reconstruction')
parser.add_argument('--random_seed', type=int, default=42, help='shuffle seed')
# Dataset setup
parser.add_argument('--dataset', type=str, default=None, help='dataset (in ucr or uea)')
    # argparse's type=bool treats any non-empty string (including 'False') as
    # True; parse the value explicitly so '--is_uea False' behaves as expected.
    parser.add_argument('--is_uea', type=lambda s: s.lower() == 'true', default=False, help='True or False')
parser.add_argument('--dataroot', type=str, default=None, help='path of UCR/UEA folder')
    parser.add_argument('--num_classes', type=int, default=0, help='number of classes')
parser.add_argument('--normalize_way', type=str, default='single', help='single or train_set')
parser.add_argument('--seq_len', type=int, default=46, help='seq_len')
parser.add_argument('--input_size', type=int, default=1, help='input_size')
# Dilated Convolution setup
parser.add_argument('--depth', type=int, default=3, help='depth of the dilated conv model')
parser.add_argument('--in_channels', type=int, default=1, help='input data channel')
    parser.add_argument('--embedding_channels', type=int, default=40, help='channels of the intermediate layers')
    parser.add_argument('--reduced_size', type=int, default=160, help='number of channels after global max pooling')
    parser.add_argument('--out_channels', type=int, default=320, help='number of channels after the linear layer')
parser.add_argument('--kernel_size', type=int, default=3, help='convolution kernel size')
# training setup
parser.add_argument('--loss', type=str, default='cross_entropy', help='loss function')
parser.add_argument('--optimizer', type=str, default='adam', help='optimizer')
parser.add_argument('--lr', type=float, default=0.001, help='learning rate')
parser.add_argument('--weight_decay', type=float, default=0.0, help='weight decay')
    parser.add_argument('--batch_size', type=int, default=128, help='batch size in [16, 128]; use a larger value on big datasets')
parser.add_argument('--epoch', type=int, default=1000, help='training epoch')
parser.add_argument('--mode', type=str, default='pretrain', help='train mode, default pretrain')
parser.add_argument('--save_dir', type=str, default='/SSD/lz/time_tsm/result_tsm_lin')
parser.add_argument('--save_csv_name', type=str, default='ex1_test_fcncls_0530_')
parser.add_argument('--continue_training', type=int, default=0, help='continue training')
parser.add_argument('--cuda', type=str, default='cuda:1')
# Decoder setup
parser.add_argument('--decoder_backbone', type=str, default='rnn', help='backbone of the decoder (rnn or fcn)')
# classifier setup
parser.add_argument('--classifier', type=str, default='linear', help='type of classifier')
parser.add_argument('--classifier_input', type=int, default=128, help='input dim of the classifiers')
    parser.add_argument('--classifier_embedding', type=int, default=128,
                        help='embedding dim of the non-linear classifier')
    # finetune setup
parser.add_argument('--source_dataset', type=str, default=None, help='source dataset of the pretrained model')
parser.add_argument('--transfer_strategy', type=str, default='classification', help='classification or reconstruction')
# parser.add_argument('--direct_train')
args = parser.parse_args()
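    # Example invocation (a sketch; the paths are illustrative):
    #   python train.py --dataset Coffee --dataroot /path/to/UCR \
    #       --backbone fcn --mode pretrain --task classification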
device = torch.device(args.cuda if torch.cuda.is_available() else "cpu")
set_seed(args)
if args.is_uea:
sum_dataset, sum_target, num_classes = load_UEA(args.dataroot, args.dataset)
args.input_size = sum_dataset.shape[2]
args.in_channels = sum_dataset.shape[2]
else:
sum_dataset, sum_target, num_classes = build_dataset(args)
args.num_classes = num_classes
args.seq_len = sum_dataset.shape[1]
# print("test: sum_dataset.shape = ", sum_dataset.shape)
if sum_dataset.shape[0] * 0.6 < args.batch_size:
args.batch_size = 16
model, classifier = build_model(args)
model, classifier = model.to(device), classifier.to(device)
loss = build_loss(args).to(device)
model_init_state = model.state_dict()
classifier_init_state = classifier.state_dict()
if args.optimizer == 'adam':
optimizer = torch.optim.Adam([{'params': model.parameters()}, {'params': classifier.parameters()}],
lr=args.lr, weight_decay=args.weight_decay)
    elif args.optimizer == 'sgd':
        # Include the classifier parameters as well, mirroring the Adam branch;
        # the original passed only the encoder's parameters to SGD.
        optimizer = torch.optim.SGD([{'params': model.parameters()}, {'params': classifier.parameters()}],
                                    lr=args.lr, weight_decay=args.weight_decay)
if args.mode == 'pretrain' and args.task == 'classification':
if not os.path.exists(args.save_dir):
os.mkdir(args.save_dir)
if not os.path.exists(os.path.join(args.save_dir, args.dataset)):
os.mkdir(os.path.join(args.save_dir, args.dataset))
if args.continue_training != 0:
model.load_state_dict(torch.load(os.path.join(args.save_dir, args.dataset, 'pretrain_weights.pt')))
classifier.load_state_dict(torch.load(os.path.join(args.save_dir, args.dataset, 'classifier_weights.pt')))
        print('{} started pretraining'.format(args.dataset))
if args.normalize_way == 'single':
# TODO normalize per series
sum_dataset = normalize_per_series(sum_dataset)
else:
sum_dataset, _, _ = normalize_train_val_test(sum_dataset, sum_dataset,
sum_dataset)
train_set = UCRDataset(torch.from_numpy(sum_dataset).to(device),
torch.from_numpy(sum_target).to(device).to(torch.int64))
train_loader = DataLoader(train_set, batch_size=args.batch_size, num_workers=0)
last_loss = float('inf')
stop_count = 0
increase_count = 0
min_loss = float('inf')
min_epoch = 0
model_to_save = None
num_steps = train_set.__len__() // args.batch_size
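        # Early-stopping heuristic (descriptive note): pretraining halts once
        # the epoch loss has plateaued (absolute change <= 1e-4) or increased
        # for 50 consecutive epochs; the minimum-loss checkpoint is the one
        # saved below.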
for epoch in range(args.epoch - args.continue_training):
if stop_count == 50 or increase_count == 50:
print("model convergent at epoch {}, early stopping.".format(epoch))
break
model.train()
classifier.train()
epoch_loss = 0
epoch_accu = 0
for x, y in train_loader:
optimizer.zero_grad()
pred = model(x)
pred = classifier(pred)
step_loss = loss(pred, y)
step_loss.backward()
optimizer.step()
epoch_loss += step_loss.item()
epoch_accu += torch.sum(torch.argmax(pred.data, axis=1) == y) / len(y)
epoch_loss /= num_steps
if abs(epoch_loss - last_loss) <= 1e-4:
stop_count += 1
else:
stop_count = 0
if epoch_loss > last_loss:
increase_count += 1
else:
increase_count = 0
last_loss = epoch_loss
if epoch_loss < min_loss:
min_loss = epoch_loss
min_epoch = epoch
model_to_save = model.state_dict()
classifier_to_save = classifier.state_dict()
epoch_accu /= num_steps
if epoch % 100 == 0:
print("epoch : {}, loss : {}, accuracy : {}".format(epoch, epoch_loss, epoch_accu))
torch.save(model_to_save, os.path.join(args.save_dir, args.dataset, 'pretrain_weights.pt'))
torch.save(classifier_to_save, os.path.join(args.save_dir, args.dataset, 'classifier_weights.pt'))
        print('{} finished pretraining, with min loss {} at epoch {}'.format(args.dataset, min_loss, min_epoch))
if args.mode == 'pretrain' and args.task == 'reconstruction':
if not os.path.exists(args.save_dir):
os.mkdir(args.save_dir)
if not os.path.exists(os.path.join(args.save_dir, args.dataset)):
os.mkdir(os.path.join(args.save_dir, args.dataset))
print('start reconstruction on {}'.format(args.dataset))
if args.normalize_way == 'single':
# TODO normalize per series
sum_dataset = normalize_per_series(sum_dataset)
else:
sum_dataset, _, _ = normalize_train_val_test(sum_dataset, sum_dataset,
sum_dataset)
train_set = UCRDataset(torch.from_numpy(sum_dataset).to(device), torch.from_numpy(sum_target))
train_loader = DataLoader(train_set, batch_size=args.batch_size, num_workers=0)
num_steps = train_set.__len__() // args.batch_size
        last_loss = float('inf')  # match the classification branch; starting at 0 would spuriously bump increase_count on the first epoch
stop_count = 0
min_loss = float('inf')
increase_count = 0
model_to_save = None
for epoch in range(args.epoch):
if stop_count == 50 or increase_count == 50:
print("model convergent at epoch {}, early stopping.".format(epoch))
break
model.train()
classifier.train()
epoch_loss = 0
for i, (x, _) in enumerate(train_loader):
                # x -> (batch_size, sequence_length)
                # x_features -> (batch_size, out_channels)
                # x_reversed -> (batch_size, sequence_length), i.e. (x_t, x_{t-1}, ..., x_1)
optimizer.zero_grad()
x_features = model(x)
if args.decoder_backbone == 'fcn':
out_x = classifier(x_features)
step_loss = loss(x, out_x)
epoch_loss = epoch_loss + step_loss.item()
step_loss.backward()
optimizer.step()
else:
x_reversed = torch.fliplr(x)
# x_reversed -> (batch_size, sequence length, 1)
time_length = x.shape[1]
out = x_reversed[:, :, 0]
hidden1 = x_features
hidden2 = x_features
hidden3 = x_features
step_loss = 0
for i in range(time_length):
hidden1, hidden2, hidden3, out = classifier(hidden1, hidden2, hidden3, out)
step_loss += loss(out, x_reversed[:, :, i])
step_loss /= time_length
epoch_loss = epoch_loss + step_loss.item()
step_loss.backward()
optimizer.step()
epoch_loss /= num_steps
if epoch % 100 == 0:
print("epoch : {}, loss : {}".format(epoch, epoch_loss))
if epoch_loss < min_loss:
model_to_save = model.state_dict()
min_loss = epoch_loss
# early stopping judge
if abs(epoch_loss - last_loss) < 1e-6:
stop_count += 1
else:
stop_count = 0
if epoch_loss > last_loss:
increase_count += 1
else:
increase_count = 0
last_loss = epoch_loss
        print('{} finished pretraining, with min loss {}'.format(args.dataset, min_loss))
save_name = args.decoder_backbone + '_reconstruction_' + 'pretrain_weights.pt'
torch.save(model_to_save, os.path.join(args.save_dir, args.dataset, save_name))
if args.mode == 'finetune':
        print('start finetuning on {}'.format(args.dataset))
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = get_all_datasets(
sum_dataset, sum_target)
losses = []
test_accuracies = []
train_time = 0.0
end_val_epochs = []
for i, train_dataset in enumerate(train_datasets):
t = time.time()
if args.transfer_strategy == 'classification':
model.load_state_dict(
torch.load(os.path.join(args.save_dir, args.source_dataset, 'pretrain_weights.pt')))
else:
if args.decoder_backbone == 'fcn':
model.load_state_dict(
torch.load(
os.path.join(args.save_dir, args.source_dataset, 'fcn_reconstruction_pretrain_weights.pt')))
else:
model.load_state_dict(
torch.load(
os.path.join(args.save_dir, args.source_dataset, 'rnn_reconstruction_pretrain_weights.pt')))
classifier.load_state_dict(classifier_init_state)
            print('fold {} started training and evaluation'.format(i))
max_accuracy = 0
train_target = train_targets[i]
val_dataset = val_datasets[i]
val_target = val_targets[i]
test_dataset = test_datasets[i]
test_target = test_targets[i]
train_dataset, val_dataset, test_dataset = fill_nan_value(train_dataset, val_dataset, test_dataset)
if args.normalize_way == 'single':
# TODO normalize per series
train_dataset = normalize_per_series(train_dataset)
val_dataset = normalize_per_series(val_dataset)
test_dataset = normalize_per_series(test_dataset)
else:
train_dataset, val_dataset, test_dataset = normalize_train_val_test(train_dataset, val_dataset,
test_dataset)
train_set = UCRDataset(torch.from_numpy(train_dataset).to(device),
torch.from_numpy(train_target).to(device).to(torch.int64))
val_set = UCRDataset(torch.from_numpy(val_dataset).to(device),
torch.from_numpy(val_target).to(device).to(torch.int64))
test_set = UCRDataset(torch.from_numpy(test_dataset).to(device),
torch.from_numpy(test_target).to(device).to(torch.int64))
train_loader = DataLoader(train_set, batch_size=args.batch_size, num_workers=0, drop_last=True)
val_loader = DataLoader(val_set, batch_size=args.batch_size, num_workers=0)
test_loader = DataLoader(test_set, batch_size=args.batch_size, num_workers=0)
train_loss = []
train_accuracy = []
last_loss = float('inf')
stop_count = 0
increase_count = 0
test_accuracy = 0
min_val_loss = float('inf')
end_val_epoch = 0
num_steps = len(train_set) // args.batch_size
for epoch in range(args.epoch):
# early stopping in finetune
if stop_count == 50 or increase_count == 50:
print('model converged at epoch {}, early stopping'.format(epoch))
break
epoch_train_loss = 0
epoch_train_acc = 0
model.train()
classifier.train()
for x, y in train_loader:
optimizer.zero_grad()
pred = model(x)
pred = classifier(pred)
step_loss = loss(pred, y)
step_loss.backward()
optimizer.step()
epoch_train_loss += step_loss.item()
epoch_train_acc += torch.sum(torch.argmax(pred.data, axis=1) == y) / len(y)
epoch_train_loss /= num_steps
epoch_train_acc /= num_steps
model.eval()
classifier.eval()
val_loss, val_accu = evaluate(val_loader, model, classifier, loss, device)
if min_val_loss > val_loss:
min_val_loss = val_loss
end_val_epoch = epoch
test_loss, test_accuracy = evaluate(test_loader, model, classifier, loss, device)
if epoch % 100 == 0:
print(
    "epoch: {}, train loss: {}, train accuracy: {},\nval loss: {}, val accuracy: {},\ntest loss: {}, test accuracy: {}".format(
        epoch, epoch_train_loss, epoch_train_acc, val_loss, val_accu, test_loss, test_accuracy))
if abs(last_loss - val_loss) <= 1e-4:
stop_count += 1
else:
stop_count = 0
if val_loss > last_loss:
increase_count += 1
else:
increase_count = 0
last_loss = val_loss
test_accuracies.append(test_accuracy)
end_val_epochs.append(end_val_epoch)
t = time.time() - t
train_time += t
print('fold {} finished training'.format(i))
test_accuracies = torch.Tensor(test_accuracies)
end_val_epochs = np.array(end_val_epochs)
save_cls_result(args, test_accu=torch.mean(test_accuracies), test_std=torch.std(test_accuracies),
train_time=train_time / 5, end_val_epoch=np.mean(end_val_epochs))
print('Done!')
if args.mode == 'directly_cls':
print('start training from scratch on {}'.format(args.dataset))
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = get_all_datasets(
sum_dataset, sum_target)
losses = []
test_accuracies = []
train_time = 0.0
end_val_epochs = []
for i, train_dataset in enumerate(train_datasets):
t = time.time()
model.load_state_dict(model_init_state)
classifier.load_state_dict(classifier_init_state)
print('fold {}: start training and evaluation'.format(i))
train_target = train_targets[i]
val_dataset = val_datasets[i]
val_target = val_targets[i]
test_dataset = test_datasets[i]
test_target = test_targets[i]
train_dataset, val_dataset, test_dataset = fill_nan_value(train_dataset, val_dataset, test_dataset)
if test_dataset.shape[0] < args.batch_size:
args.batch_size = args.batch_size // 2
if args.normalize_way == 'single':
# TODO normalize per series
if args.is_uea:
train_dataset = normalize_uea_set(train_dataset)
val_dataset = normalize_uea_set(val_dataset)
test_dataset = normalize_uea_set(test_dataset)
else:
train_dataset = normalize_per_series(train_dataset)
val_dataset = normalize_per_series(val_dataset)
test_dataset = normalize_per_series(test_dataset)
else:
train_dataset, val_dataset, test_dataset = normalize_train_val_test(train_dataset, val_dataset,
test_dataset)
if args.is_uea:
train_set = UEADataset(torch.from_numpy(train_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(train_target).type(torch.FloatTensor).to(device).to(
torch.int64))
val_set = UEADataset(torch.from_numpy(val_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(val_target).type(torch.FloatTensor).to(device).to(torch.int64))
test_set = UEADataset(torch.from_numpy(test_dataset).type(torch.FloatTensor).to(device),
torch.from_numpy(test_target).type(torch.FloatTensor).to(device).to(torch.int64))
else:
train_set = UCRDataset(torch.from_numpy(train_dataset).to(device),
torch.from_numpy(train_target).to(device).to(torch.int64))
val_set = UCRDataset(torch.from_numpy(val_dataset).to(device),
torch.from_numpy(val_target).to(device).to(torch.int64))
test_set = UCRDataset(torch.from_numpy(test_dataset).to(device),
torch.from_numpy(test_target).to(device).to(torch.int64))
train_loader = DataLoader(train_set, batch_size=args.batch_size, num_workers=0, drop_last=True)
val_loader = DataLoader(val_set, batch_size=args.batch_size, num_workers=0)
test_loader = DataLoader(test_set, batch_size=args.batch_size, num_workers=0)
train_loss = []
train_accuracy = []
last_loss = float('inf')
stop_count = 0
increase_count = 0
test_accuracy = 0
min_val_loss = float('inf')
end_val_epoch = 0
num_steps = len(train_set) // args.batch_size
# print("test, args.batch_size = ", args.batch_size, ", num_steps = ", num_steps)
for epoch in range(args.epoch):
# early stopping in finetune
if stop_count == 50 or increase_count == 50:
print('model converged at epoch {}, early stopping'.format(epoch))
break
epoch_train_loss = 0
epoch_train_acc = 0
model.train()
classifier.train()
for x, y in train_loader:
optimizer.zero_grad()
pred = model(x)
pred = classifier(pred)
step_loss = loss(pred, y)
step_loss.backward()
optimizer.step()
epoch_train_loss += step_loss.item()
epoch_train_acc += torch.sum(torch.argmax(pred.data, axis=1) == y) / len(y)
epoch_train_loss /= num_steps
epoch_train_acc /= num_steps
model.eval()
classifier.eval()
val_loss, val_accu = evaluate(val_loader, model, classifier, loss, device)
if min_val_loss > val_loss:
min_val_loss = val_loss
end_val_epoch = epoch
test_loss, test_accuracy = evaluate(test_loader, model, classifier, loss, device)
if epoch % 100 == 0:
print(
    "epoch: {}, train loss: {}, train accuracy: {},\nval loss: {}, val accuracy: {},\ntest loss: {}, test accuracy: {}".format(
        epoch, epoch_train_loss, epoch_train_acc, val_loss, val_accu, test_loss, test_accuracy))
if abs(last_loss - val_loss) <= 1e-4:
stop_count += 1
else:
stop_count = 0
if val_loss > last_loss:
increase_count += 1
else:
increase_count = 0
last_loss = val_loss
test_accuracies.append(test_accuracy)
end_val_epochs.append(end_val_epoch)
t = time.time() - t
train_time += t
print('fold {} finished training'.format(i))
test_accuracies = torch.Tensor(test_accuracies)
end_val_epochs = np.array(end_val_epochs)
save_cls_result(args, test_accu=torch.mean(test_accuracies), test_std=torch.std(test_accuracies),
train_time=train_time / 5, end_val_epoch=np.mean(end_val_epochs), seeds=args.random_seed)
print('Done!')
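# --- Editor's sketch (not part of the original script): the training modes
# above all hand-roll the same early-stopping rule, i.e. stop once the
# monitored loss has plateaued or risen for 50 consecutive epochs. A minimal
# equivalent helper, with illustrative names (`PlateauStopper`, `patience`,
# `tol`), could look like this:
class PlateauStopper:
    def __init__(self, patience=50, tol=1e-4):
        self.patience = patience  # epochs of plateau/increase tolerated
        self.tol = tol            # |delta| below this counts as a plateau
        self.last = float('inf')
        self.flat = 0
        self.rising = 0

    def step(self, value):
        """Update counters with the latest loss; return True to stop."""
        self.flat = self.flat + 1 if abs(value - self.last) <= self.tol else 0
        self.rising = self.rising + 1 if value > self.last else 0
        self.last = value
        return self.flat >= self.patience or self.rising >= self.patience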
================================================
FILE: ts_classification_methods/ts2vec_cls/__init__.py
================================================
================================================
FILE: ts_classification_methods/ts2vec_cls/datautils.py
================================================
import os
import numpy as np
import pandas as pd
import math
import random
from datetime import datetime
import pickle
from utils import pkl_load, pad_nan_to_target
from scipy.io.arff import loadarff
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.model_selection import StratifiedKFold
def load_UCR(dataset):
# default relative root ('datasets/UCR'), overridden here by a hard-coded server path
dataroot = '/dev_data/lz/time_series_pretrain/datasets/UCRArchive_2018'
train_file = os.path.join(dataroot, dataset, dataset + "_TRAIN.tsv")
test_file = os.path.join(dataroot, dataset, dataset + "_TEST.tsv")
train_df = pd.read_csv(train_file, sep='\t', header=None)
test_df = pd.read_csv(test_file, sep='\t', header=None)
train_array = np.array(train_df)
test_array = np.array(test_df)
# Move the labels to {0, ..., L-1}
labels = np.unique(train_array[:, 0])
transform = {}
for i, l in enumerate(labels):
transform[l] = i
train = train_array[:, 1:].astype(np.float64)
train_labels = np.vectorize(transform.get)(train_array[:, 0])
test = test_array[:, 1:].astype(np.float64)
test_labels = np.vectorize(transform.get)(test_array[:, 0])
# Normalization for non-normalized datasets
# To keep the amplitude information, we do not normalize values over
# individual time series, but on the whole dataset
if dataset not in [
'AllGestureWiimoteX',
'AllGestureWiimoteY',
'AllGestureWiimoteZ',
'BME',
'Chinatown',
'Crop',
'EOGHorizontalSignal',
'EOGVerticalSignal',
'Fungi',
'GestureMidAirD1',
'GestureMidAirD2',
'GestureMidAirD3',
'GesturePebbleZ1',
'GesturePebbleZ2',
'GunPointAgeSpan',
'GunPointMaleVersusFemale',
'GunPointOldVersusYoung',
'HouseTwenty',
'InsectEPGRegularTrain',
'InsectEPGSmallTrain',
'MelbournePedestrian',
'PickupGestureWiimoteZ',
'PigAirwayPressure',
'PigArtPressure',
'PigCVP',
'PLAID',
'PowerCons',
'Rock',
'SemgHandGenderCh2',
'SemgHandMovementCh2',
'SemgHandSubjectCh2',
'ShakeGestureWiimoteZ',
'SmoothSubspace',
'UMD'
]:
return train[..., np.newaxis], train_labels, test[..., np.newaxis], test_labels
mean = np.nanmean(train)
std = np.nanstd(train)
train = (train - mean) / std
test = (test - mean) / std
return train[..., np.newaxis], train_labels, test[..., np.newaxis], test_labels
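# --- Editor's sketch (not part of the original file): the normalization note
# above is a deliberate design choice. Using statistics of the whole training
# split keeps relative amplitude differences between series, which per-series
# z-normalization would erase:
def _demo_dataset_level_normalization():
    a = np.array([[1., 2., 3.], [10., 20., 30.]])  # second series is 10x larger
    whole = (a - np.nanmean(a)) / np.nanstd(a)     # amplitude gap survives
    per = (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
    return whole, per  # per[0] == per[1]: per-series scaling drops the amplitude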
def load_UEA(dataset):
train_data = loadarff(f'datasets/UEA/{dataset}/{dataset}_TRAIN.arff')[0]
test_data = loadarff(f'datasets/UEA/{dataset}/{dataset}_TEST.arff')[0]
def extract_data(data):
res_data = []
res_labels = []
for t_data, t_label in data:
t_data = np.array([ d.tolist() for d in t_data ])
t_label = t_label.decode("utf-8")
res_data.append(t_data)
res_labels.append(t_label)
return np.array(res_data).swapaxes(1, 2), np.array(res_labels)
train_X, train_y = extract_data(train_data)
test_X, test_y = extract_data(test_data)
scaler = StandardScaler()
scaler.fit(train_X.reshape(-1, train_X.shape[-1]))
train_X = scaler.transform(train_X.reshape(-1, train_X.shape[-1])).reshape(train_X.shape)
test_X = scaler.transform(test_X.reshape(-1, test_X.shape[-1])).reshape(test_X.shape)
labels = np.unique(train_y)
transform = { k : i for i, k in enumerate(labels)}
train_y = np.vectorize(transform.get)(train_y)
test_y = np.vectorize(transform.get)(test_y)
return train_X, train_y, test_X, test_y
def load_forecast_npy(name, univar=False):
data = np.load(f'datasets/{name}.npy')
if univar:
data = data[:, -1:]  # keep only the last variable, mirroring load_forecast_csv's univariate branch
train_slice = slice(None, int(0.6 * len(data)))
valid_slice = slice(int(0.6 * len(data)), int(0.8 * len(data)))
test_slice = slice(int(0.8 * len(data)), None)
scaler = StandardScaler().fit(data[train_slice])
data = scaler.transform(data)
data = np.expand_dims(data, 0)
pred_lens = [24, 48, 96, 288, 672]
return data, train_slice, valid_slice, test_slice, scaler, pred_lens, 0
def _get_time_features(dt):
    return np.stack([
        dt.minute.to_numpy(),
        dt.hour.to_numpy(),
        dt.dayofweek.to_numpy(),
        dt.day.to_numpy(),
        dt.dayofyear.to_numpy(),
        dt.month.to_numpy(),
        dt.isocalendar().week.to_numpy(),  # `weekofyear` was removed in pandas 2.0
    ], axis=1).astype(float)  # `np.float` was removed in NumPy 1.24
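# --- Editor's sketch (not part of the original file): a quick usage check for
# _get_time_features; the index below is hypothetical.
def _demo_time_features():
    idx = pd.date_range('2016-07-01', periods=4, freq='H')
    feats = _get_time_features(idx)
    # (4, 7): minute, hour, dayofweek, day, dayofyear, month, ISO week
    return feats.shape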
def load_forecast_csv(name, univar=False):
data = pd.read_csv(f'datasets/{name}.csv', index_col='date', parse_dates=True)
dt_embed = _get_time_features(data.index)
n_covariate_cols = dt_embed.shape[-1]
if univar:
if name in ('ETTh1', 'ETTh2', 'ETTm1', 'ETTm2'):
data = data[['OT']]
elif name == 'electricity':
data = data[['MT_001']]
else:
data = data.iloc[:, -1:]
data = data.to_numpy()
if name == 'ETTh1' or name == 'ETTh2':
train_slice = slice(None, 12*30*24)
valid_slice = slice(12*30*24, 16*30*24)
test_slice = slice(16*30*24, 20*30*24)
elif name == 'ETTm1' or name == 'ETTm2':
train_slice = slice(None, 12*30*24*4)
valid_slice = slice(12*30*24*4, 16*30*24*4)
test_slice = slice(16*30*24*4, 20*30*24*4)
else:
train_slice = slice(None, int(0.6 * len(data)))
valid_slice = slice(int(0.6 * len(data)), int(0.8 * len(data)))
test_slice = slice(int(0.8 * len(data)), None)
scaler = StandardScaler().fit(data[train_slice])
data = scaler.transform(data)
if name == 'electricity':  # `in ('electricity')` would be a substring test on a plain string
    data = np.expand_dims(data.T, -1)  # Each variable is an instance rather than a feature
else:
data = np.expand_dims(data, 0)
if n_covariate_cols > 0:
dt_scaler = StandardScaler().fit(dt_embed[train_slice])
dt_embed = np.expand_dims(dt_scaler.transform(dt_embed), 0)
data = np.concatenate([np.repeat(dt_embed, data.shape[0], axis=0), data], axis=-1)
if name in ('ETTh1', 'ETTh2', 'electricity'):
pred_lens = [24, 48, 168, 336, 720]
else:
pred_lens = [24, 48, 96, 288, 672]
return data, train_slice, valid_slice, test_slice, scaler, pred_lens, n_covariate_cols
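# --- Editor's note on the ETT split arithmetic above: ETTh1/ETTh2 are hourly,
# so 12 * 30 * 24 = 8640 points ~ 12 months of training followed by 4 months
# (2880 points) each for validation and test; ETTm1/ETTm2 are sampled every
# 15 minutes, hence the extra factor of 4 (34560 / 11520 / 11520 points).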
def load_anomaly(name):
res = pkl_load(f'datasets/{name}.pkl')
return res['all_train_data'], res['all_train_labels'], res['all_train_timestamps'], \
res['all_test_data'], res['all_test_labels'], res['all_test_timestamps'], \
res['delay']
def gen_ano_train_data(all_train_data):
maxl = np.max([ len(all_train_data[k]) for k in all_train_data ])
pretrain_data = []
for k in all_train_data:
train_data = pad_nan_to_target(all_train_data[k], maxl, axis=0)
pretrain_data.append(train_data)
pretrain_data = np.expand_dims(np.stack(pretrain_data), 2)
return pretrain_data
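# --- Editor's note (not part of the original file): gen_ano_train_data pads
# each per-entity series with NaN up to the longest one via pad_nan_to_target,
# then stacks the result into a single (num_entities, max_len, 1) array so all
# entities can be pretrained in one batch.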
================================================
FILE: ts_classification_methods/ts2vec_cls/models/__init__.py
================================================
from .encoder import TSEncoder
================================================
FILE: ts_classification_methods/ts2vec_cls/models/dilated_conv.py
================================================
import torch
from torch import nn
import torch.nn.functional as F
import numpy as np
class SamePadConv(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, dilation=1, groups=1):
super().__init__()
self.receptive_field = (kernel_size - 1) * dilation + 1
padding = self.receptive_field // 2
self.conv = nn.Conv1d(
in_channels, out_channels, kernel_size,
padding=padding,
dilation=dilation,
groups=groups
)
self.remove = 1 if self.receptive_field % 2 == 0 else 0
def forward(self, x):
out = self.conv(x)
if self.remove > 0:
out = out[:, :, : -self.remove]
return out
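# --- Editor's sketch (not part of the original file): SamePadConv pads by
# receptive_field // 2 on both sides and trims one trailing element when the
# receptive field is even, so the output keeps the input length T:
def _demo_same_pad_conv():
    conv = SamePadConv(in_channels=1, out_channels=4, kernel_size=3, dilation=4)
    x = torch.randn(2, 1, 50)  # (batch, channels, T)
    return conv(x).shape       # torch.Size([2, 4, 50])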
class ConvBlock(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, dilation, final=False):
super().__init__()
self.conv1 = SamePadConv(in_channels, out_channels, kernel_size, dilation=dilation)
self.conv2 = SamePadConv(out_channels, out_channels, kernel_size, dilation=dilation)
self.projector = nn.Conv1d(in_channels, out_channels, 1) if in_channels != out_channels or final else None
def forward(self, x):
residual = x if self.projector is None else self.projector(x)
x = F.gelu(x)
x = self.conv1(x)
x = F.gelu(x)
x = self.conv2(x)
return x + residual
class DilatedConvEncoder(nn.Module):
def __init__(self, in_channels, channels, kernel_size):
super().__init__()
self.net = nn.Sequential(*[
ConvBlock(
channels[i-1] if i > 0 else in_channels,
channels[i],
kernel_size=kernel_size,
dilation=2**i,
final=(i == len(channels)-1)
)
for i in range(len(channels))
])
def forward(self, x):
return self.net(x)
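# --- Editor's note (not part of the original file): with dilation 2**i and
# two kernel-3 convolutions per block, block i widens the receptive field by
# 2 * (kernel_size - 1) * 2**i = 4 * 2**i, so the 11 blocks built by TSEncoder
# below (depth=10 hidden blocks plus the output block) cover roughly
# 4 * (2**11 - 1) ~ 8k timesteps around each output position.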
================================================
FILE: ts_classification_methods/ts2vec_cls/models/encoder.py
================================================
import torch
from torch import nn
import torch.nn.functional as F
import numpy as np
from .dilated_conv import DilatedConvEncoder
def generate_continuous_mask(B, T, n=5, l=0.1):
res = torch.full((B, T), True, dtype=torch.bool)
if isinstance(n, float):
n = int(n * T)
n = max(min(n, T // 2), 1)
if isinstance(l, float):
l = int(l * T)
l = max(l, 1)
for i in range(B):
for _ in range(n):
t = np.random.randint(T-l+1)
res[i, t:t+l] = False
return res
def generate_binomial_mask(B, T, p=0.5):
return torch.from_numpy(np.random.binomial(1, p, size=(B, T))).to(torch.bool)
class TSEncoder(nn.Module):
def __init__(self, input_dims, output_dims, hidden_dims=64, depth=10, mask_mode='binomial'):
super().__init__()
self.input_dims = input_dims
self.output_dims = output_dims
self.hidden_dims = hidden_dims
self.mask_mode = mask_mode
self.input_fc = nn.Linear(input_dims, hidden_dims)
self.feature_extractor = DilatedConvEncoder(
hidden_dims,
[hidden_dims] * depth + [output_dims],
kernel_size=3
)
self.repr_dropout = nn.Dropout(p=0.1)
def forward(self, x, mask=None): # x: B x T x input_dims
nan_mask = ~x.isnan().any(axis=-1)
x[~nan_mask] = 0
x = self.input_fc(x) # B x T x Ch
# generate & apply mask
if mask is None:
if self.training:
mask = self.mask_mode
else:
mask = 'all_true'
if mask == 'binomial':
mask = generate_binomial_mask(x.size(0), x.size(1)).to(x.device)
elif mask == 'continuous':
mask = generate_continuous_mask(x.size(0), x.size(1)).to(x.device)
elif mask == 'all_true':
mask = x.new_full((x.size(0), x.size(1)), True, dtype=torch.bool)
elif mask == 'all_false':
mask = x.new_full((x.size(0), x.size(1)), False, dtype=torch.bool)
elif mask == 'mask_last':
mask = x.new_full((x.size(0), x.size(1)), True, dtype=torch.bool)
mask[:, -1] = False
mask &= nan_mask
x[~mask] = 0
# conv encoder
x = x.transpose(1, 2) # B x Ch x T
x = self.repr_dropout(self.feature_extractor(x)) # B x Co x T
x = x.transpose(1, 2) # B x T x Co
return x
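# --- Editor's sketch (not part of the original file): during training the
# encoder zeroes a random subset of timesteps before the convolutional stack.
# A quick way to inspect the two mask generators:
def _demo_masks():
    b = generate_binomial_mask(B=2, T=10, p=0.5)       # keep each step w.p. 0.5
    c = generate_continuous_mask(B=2, T=10, n=2, l=3)  # hide two spans of length 3
    return b.shape, c.shape                            # both (2, 10), dtype bool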
================================================
FILE: ts_classification_methods/ts2vec_cls/models/losses.py
================================================
import torch
from torch import nn
import torch.nn.functional as F
def hierarchical_contrastive_loss(z1, z2, alpha=0.5, temporal_unit=0):
loss = torch.tensor(0., device=z1.device)
d = 0
while z1.size(1) > 1:
if alpha != 0:
loss += alpha * instance_contrastive_loss(z1, z2)
if d >= temporal_unit:
if 1 - alpha != 0:
loss += (1 - alpha) * temporal_contrastive_loss(z1, z2)
d += 1
z1 = F.max_pool1d(z1.transpose(1, 2), kernel_size=2).transpose(1, 2)
z2 = F.max_pool1d(z2.transpose(1, 2), kernel_size=2).transpose(1, 2)
if z1.size(1) == 1:
if alpha != 0:
loss += alpha * instance_contrastive_loss(z1, z2)
d += 1
return loss / d
def instance_contrastive_loss(z1, z2):
B, T = z1.size(0), z1.size(1)
if B == 1:
return z1.new_tensor(0.)
z = torch.cat([z1, z2], dim=0) # 2B x T x C
z = z.transpose(0, 1) # T x 2B x C
sim = torch.matmul(z, z.transpose(1, 2)) # T x 2B x 2B
logits = torch.tril(sim, diagonal=-1)[:, :, :-1] # T x 2B x (2B-1)
logits += torch.triu(sim, diagonal=1)[:, :, 1:]
logits = -F.log_softmax(logits, dim=-1)
i = torch.arange(B, device=z1.device)
loss = (logits[:, i, B + i - 1].mean() + logits[:, B + i, i].mean()) / 2
return loss
def temporal_contrastive_loss(z1, z2):
B, T = z1.size(0), z1.size(1)
if T == 1:
return z1.new_tensor(0.)
z = torch.cat([z1, z2], dim=1) # B x 2T x C
sim = torch.matmul(z, z.transpose(1, 2)) # B x 2T x 2T
logits = torch.tril(sim, diagonal=-1)[:, :, :-1] # B x 2T x (2T-1)
logits += torch.triu(sim, diagonal=1)[:, :, 1:]
logits = -F.log_softmax(logits, dim=-1)
t = torch.arange(T, device=z1.device)
loss = (logits[:, t, T + t - 1].mean() + logits[:, T + t, t].mean()) / 2
return loss
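# --- Editor's sketch (not part of the original file): the hierarchical loss
# max-pools the time axis by 2 until one timestep remains, adding the instance
# term at every scale and the temporal term once d >= temporal_unit.
def _demo_hierarchical_loss():
    z1 = torch.randn(4, 16, 8)  # (batch, time, channels), one augmented view
    z2 = torch.randn(4, 16, 8)  # the other view of the same windows
    return hierarchical_contrastive_loss(z1, z2, alpha=0.5)  # scalar tensor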
================================================
FILE: ts_classification_methods/ts2vec_cls/result/ts2vec_tsm_train_val_b8_single_norm_0409_cls_result.csv
================================================
id,dataset_name,test_accuracy,test_std,train_time,end_val_epoch,seeds
0,ACSF1,0.88,0.0371,28.4825,0.0,42
1,Adiac,0.8066,0.0652,5.3666,0.0,42
2,AllGestureWiimoteX,0.782,0.0164,16.295,0.0,42
3,AllGestureWiimoteY,0.84,0.015,16.5032,0.0,42
4,AllGestureWiimoteZ,0.773,0.0527,16.4974,0.0,42
5,ArrowHead,0.8908,0.0402,5.1086,0.0,42
6,BME,0.9944,0.0124,4.5481,0.0,42
7,Beef,0.6833,0.1807,5.6727,0.0,42
8,BeetleFly,0.925,0.1118,5.8705,0.0,42
9,BirdChicken,0.975,0.0559,5.8268,0.0,42
10,CBF,1.0,0.0,5.0538,0.0,42
11,Car,0.8833,0.0349,6.2595,0.0,42
12,Chinatown,0.9726,0.0217,3.9859,0.0,42
13,ChlorineConcentration,0.9998,0.0005,34.2372,0.0,42
14,CinCECGTorso,0.9972,0.0029,32.5504,0.0,42
15,Coffee,1.0,0.0,4.9984,0.0,42
16,Computers,0.694,0.0422,18.2591,0.0,42
17,CricketX,0.8321,0.0394,13.9189,0.0,42
18,CricketY,0.8256,0.0461,13.9781,0.0,42
19,CricketZ,0.8487,0.0242,13.9948,0.0,42
20,Crop,0.7448,0.009,154.3218,0.0,42
21,DiatomSizeReduction,0.9969,0.0069,5.4306,0.0,42
22,DistalPhalanxOutlineAgeGroup,0.8125,0.0295,4.6066,0.0,42
23,DistalPhalanxOutlineCorrect,0.8185,0.0388,5.0256,0.0,42
24,DistalPhalanxTW,0.7792,0.0384,4.5488,0.0,42
25,DodgerLoopDay,0.6391,0.0575,5.1768,0.0,42
26,DodgerLoopGame,0.9563,0.0523,5.1137,0.0,42
27,DodgerLoopWeekend,0.9812,0.028,5.0872,0.0,42
28,ECG200,0.88,0.0112,4.4465,0.0,42
29,ECG5000,0.9528,0.0059,24.8707,0.0,42
30,ECGFiveDays,1.0,0.0,4.9662,0.0,42
31,EOGHorizontalSignal,0.7279,0.0204,25.4733,0.0,42
32,EOGVerticalSignal,0.6519,0.0083,25.5223,0.0,42
33,Earthquakes,0.7983,0.0056,15.8322,0.0,42
34,ElectricDevices,0.8848,0.0042,915.319,0.0,42
35,EthanolLevel,0.5897,0.032,35.0616,0.0,42
36,FaceAll,0.9871,0.004,15.0294,0.0,42
37,FaceFour,0.964,0.0379,5.506,0.0,42
38,FacesUCR,0.9902,0.004,15.3127,0.0,42
39,FiftyWords,0.8022,0.0306,14.236,0.0,42
40,Fish,0.9343,0.0329,5.8832,0.0,42
41,FordA,0.9303,0.0044,39.9856,0.0,42
42,FordB,0.9132,0.0134,35.6743,0.0,42
43,FreezerRegularTrain,0.9977,0.0009,18.0955,0.0,42
44,FreezerSmallTrain,0.9969,0.0036,17.025,0.0,42
45,Fungi,1.0,0.0,4.8975,0.0,42
46,GestureMidAirD1,0.6479,0.0531,5.5628,0.0,42
47,GestureMidAirD2,0.527,0.1203,5.5954,0.0,42
48,GestureMidAirD3,0.3315,0.0435,5.5434,0.0,42
49,GesturePebbleZ1,0.9507,0.0327,5.9473,0.0,42
50,GesturePebbleZ2,0.9473,0.0215,5.8398,0.0,42
51,GunPoint,0.995,0.0112,4.7644,0.0,42
52,GunPointAgeSpan,0.9889,0.0079,4.8011,0.0,42
53,GunPointMaleVersusFemale,0.9956,0.0061,4.7437,0.0,42
54,GunPointOldVersusYoung,0.9956,0.0061,4.8197,0.0,42
55,Ham,0.8741,0.0346,5.7141,0.0,42
56,HandOutlines,0.9168,0.0142,57.6561,0.0,42
57,Haptics,0.5594,0.0468,22.9492,0.0,42
58,Herring,0.6806,0.0757,5.9701,0.0,42
59,HouseTwenty,0.9496,0.0358,38.364,0.0,42
60,InlineSkate,0.6415,0.0311,36.4749,0.0,42
61,InsectEPGRegularTrain,0.9807,0.007,16.8548,0.0,42
62,InsectEPGSmallTrain,0.9699,0.0254,6.4435,0.0,42
63,InsectWingbeatSound,0.7077,0.0057,16.7995,0.0,42
64,ItalyPowerDemand,0.9745,0.0083,4.5284,0.0,42
65,LargeKitchenAppliances,0.9147,0.0119,18.617,0.0,42
66,Lightning2,0.901,0.0365,6.4622,0.0,42
67,Lightning7,0.8389,0.06,5.2601,0.0,42
68,Mallat,0.9962,0.0023,26.1331,0.0,42
69,Meat,1.0,0.0,6.2323,0.0,42
70,MedicalImages,0.8431,0.0394,6.0271,0.0,42
71,MelbournePedestrian,0.8986,0.0059,20.6329,0.0,42
72,MiddlePhalanxOutlineAgeGroup,0.7347,0.0647,4.865,0.0,42
73,MiddlePhalanxOutlineCorrect,0.8507,0.0192,5.2434,0.0,42
74,MiddlePhalanxTW,0.6455,0.0219,5.0957,0.0,42
75,MixedShapesRegularTrain,0.945,0.007,26.7892,0.0,42
76,MixedShapesSmallTrain,0.941,0.0081,25.0754,0.0,42
77,MoteStrain,0.9717,0.0032,5.2492,0.0,42
78,NonInvasiveFetalECGThorax1,0.9392,0.011,28.4395,0.0,42
79,NonInvasiveFetalECGThorax2,0.949,0.0088,28.016,0.0,42
80,OSULeaf,0.8937,0.0292,14.9081,0.0,42
81,OliveOil,0.85,0.0697,6.1428,0.0,42
82,PLAID,0.5503,0.0361,110.9831,0.0,42
83,PhalangesOutlinesCorrect,0.8439,0.0156,17.1865,0.0,42
84,Phoneme,0.4204,0.0122,26.7327,0.0,42
85,PickupGestureWiimoteZ,0.84,0.0548,5.4558,0.0,42
86,PigAirwayPressure,0.4038,0.0475,38.5497,0.0,42
87,PigArtPressure,0.9424,0.0307,38.7209,0.0,42
88,PigCVP,0.8753,0.0514,38.5487,0.0,42
89,Plane,0.9905,0.0213,5.0078,0.0,42
90,PowerCons,0.9861,0.0098,4.8548,0.0,42
91,ProximalPhalanxOutlineAgeGroup,0.8545,0.0344,4.6571,0.0,42
92,ProximalPhalanxOutlineCorrect,0.8833,0.0257,4.9042,0.0,42
93,ProximalPhalanxTW,0.8281,0.0179,4.646,0.0,42
94,RefrigerationDevices,0.7627,0.0234,18.778,0.0,42
95,Rock,0.8286,0.0814,58.8638,0.0,42
96,ScreenType,0.552,0.0338,18.7799,0.0,42
97,SemgHandGenderCh2,0.9544,0.0234,29.3616,0.0,42
98,SemgHandMovementCh2,0.7767,0.0416,29.8932,0.0,42
99,SemgHandSubjectCh2,0.9344,0.0169,29.4465,0.0,42
100,ShakeGestureWiimoteZ,0.93,0.0447,5.6136,0.0,42
101,ShapeletSim,0.985,0.0137,5.9648,0.0,42
102,ShapesAll,0.9133,0.0104,17.5754,0.0,42
103,SmallKitchenAppliances,0.7347,0.0674,18.6373,0.0,42
104,SmoothSubspace,0.9633,0.0361,3.8639,0.0,42
105,SonyAIBORobotSurface1,0.9952,0.0044,4.5268,0.0,42
106,SonyAIBORobotSurface2,0.9949,0.0051,4.6777,0.0,42
107,StarLightCurves,0.9801,0.0042,77.98,0.0,42
108,Strawberry,0.9685,0.0132,29.3446,0.0,42
109,SwedishLeaf,0.9502,0.0262,5.5732,0.0,42
110,Symbols,0.9863,0.0122,15.0173,0.0,42
111,SyntheticControl,0.9983,0.0037,4.5299,0.0,42
112,ToeSegmentation1,0.9588,0.041,5.1871,0.0,42
113,ToeSegmentation2,0.9517,0.0273,5.4543,0.0,42
114,Trace,1.0,0.0,5.1909,0.0,42
115,TwoLeadECG,0.9991,0.0019,5.1407,0.0,42
116,TwoPatterns,1.0,0.0,31.0201,0.0,42
117,UMD,0.9944,0.0124,4.8644,0.0,42
118,UWaveGestureLibraryAll,0.9652,0.0106,34.1178,0.0,42
119,UWaveGestureLibraryX,0.8513,0.0161,26.5674,0.0,42
120,UWaveGestureLibraryY,0.7874,0.0046,29.9703,0.0,42
121,UWaveGestureLibraryZ,0.8024,0.0141,27.9695,0.0,42
122,Wafer,0.999,0.0006,23.6,0.0,42
123,Wine,0.9648,0.0568,5.0664,0.0,42
124,WordSynonyms,0.7989,0.0186,13.976,0.0,42
125,Worms,0.721,0.0309,20.011,0.0,42
126,WormsTwoClass,0.7716,0.0727,20.1901,0.0,42
127,Yoga,0.9709,0.0058,26.4065,0.0,42
================================================
FILE: ts_classification_methods/ts2vec_cls/scripts/generator_ts2vec.py
================================================
ucr_dataset = ['ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY', 'AllGestureWiimoteZ', 'ArrowHead', 'BME',
'Beef',
'BeetleFly', 'BirdChicken', 'CBF', 'Car', 'Chinatown', 'ChlorineConcentration', 'CinCECGTorso', 'Coffee',
'Computers',
'CricketX', 'CricketY', 'CricketZ', 'Crop', 'DiatomSizeReduction', 'DistalPhalanxOutlineAgeGroup',
'DistalPhalanxOutlineCorrect', 'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame', 'DodgerLoopWeekend',
'ECG200', 'ECG5000', 'ECGFiveDays', 'EOGHorizontalSignal', 'EOGVerticalSignal', 'Earthquakes',
'ElectricDevices',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords', 'Fish', 'FordA', 'FordB',
'FreezerRegularTrain',
'FreezerSmallTrain', 'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3', 'GesturePebbleZ1',
'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan', 'GunPointMaleVersusFemale', 'GunPointOldVersusYoung',
'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate', 'InsectEPGRegularTrain',
'InsectEPGSmallTrain',
'InsectWingbeatSound', 'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2', 'Lightning7',
'Mallat', 'Meat',
'MedicalImages', 'MelbournePedestrian', 'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain', 'MoteStrain',
'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OSULeaf', 'OliveOil', 'PLAID', 'PhalangesOutlinesCorrect', 'Phoneme',
'PickupGestureWiimoteZ', 'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'Plane', 'PowerCons',
'ProximalPhalanxOutlineAgeGroup', 'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices',
'Rock', 'ScreenType', 'SemgHandGenderCh2', 'SemgHandMovementCh2', 'SemgHandSubjectCh2',
'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace', 'SonyAIBORobotSurface1',
'SonyAIBORobotSurface2', 'StarLightCurves', 'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG', 'TwoPatterns', 'UMD',
'UWaveGestureLibraryAll',
'UWaveGestureLibraryX', 'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine', 'WordSynonyms',
'Worms',
'WormsTwoClass', 'Yoga']
i = 0
for dataset in ucr_dataset:
print("i = ", i, "dataset_name = ", dataset)
i = i + 1
# '''
# python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Coffee --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1
# '''
# with open('/SSD/lz/time_tsm/ts2vec_cls/scripts/ts2vec_fcn_set_norm.sh', 'a') as f:
# f.write('python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 '
# '--dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set '
# '--dataset ' + dataset
# + ' --fcn_epoch 1000 --gpu 1 --batch-size 8 ' +
# ' --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1' + ';\n')
#
# '''
# python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_single_norm_0404_ --cuda cuda:1
# '''
# with open('/SSD/lz/time_tsm/ts2vec_cls/scripts/ts2vec_fcn_single_norm.sh', 'a') as f:
# f.write('python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 '
# '--dataroot /SSD/lz/UCRArchive_2018 --normalize_way single '
# '--dataset ' + dataset
# + ' --fcn_epoch 1000 --gpu 1 --batch-size 8 ' +
# ' --save_csv_name ts2vec_fcn_single_norm_0404_ --cuda cuda:1' + ';\n')
# '''
# python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Coffee --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_
# '''
# with open('/SSD/lz/time_tsm/ts2vec_cls/scripts/ts2vec_tsm_set_norm.sh', 'a') as f:
# f.write('python train_tsm.py '
# '--dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set '
# '--dataset ' + dataset
# + ' --gpu 1 --batch-size 8 ' +
# ' --save_csv_name ts2vec_tsm_set_norm_0404_' + ';\n')
#
'''
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_single_norm_0404_
'''
with open('/SSD/lz/time_tsm/ts2vec_cls/scripts/ts2vec_tsm_single_norm.sh', 'a') as f:
f.write('python train_tsm.py '
'--dataroot /SSD/lz/UCRArchive_2018 --normalize_way single '
'--dataset ' + dataset
+ ' --gpu 1 --batch-size 8 ' +
' --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_' + ';\n')
i = 0
for dataset in ucr_dataset:
print("i = ", i, "dataset_name = ", dataset)
i = i + 1
'''
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_single_norm_0404_
'''
with open('/SSD/lz/time_tsm/ts2vec_cls/scripts/ts2vec_tsm_single_norm.sh', 'a') as f:
f.write('python train_tsm.py '
'--dataroot /SSD/lz/UCRArchive_2018 --normalize_way single '
'--dataset ' + dataset
+ ' --gpu 1 --batch-size 16 ' +
' --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_' + ';\n')
## nohup ./scripts/ts2vec_fcn_set_norm.sh &
## nohup ./scripts/ts2vec_fcn_single_norm.sh &
## nohup ./scripts/ts2vec_tsm_set_norm.sh &
## nohup ./scripts/ts2vec_tsm_single_norm.sh &
================================================
FILE: ts_classification_methods/ts2vec_cls/scripts/generator_ts2vec_uea.py
================================================
uea_all = ['ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions', 'CharacterTrajectories',
'Cricket', 'DuckDuckGeese', 'EigenWorms', 'Epilepsy', 'EthanolConcentration', 'ERing',
'FaceDetection', 'FingerMovements', 'HandMovementDirection', 'Handwriting',
'Heartbeat', 'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PenDigits', 'PEMS-SF', 'PhonemeSpectra', 'RacketSports', 'SelfRegulationSCP1',
'SelfRegulationSCP2', 'SpokenArabicDigits', 'StandWalkJump', 'UWaveGestureLibrary']
i = 0
for dataset in uea_all:
print("i = ", i, "dataset_name = ", dataset)
i = i + 1
'''
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset BasicMotions --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_
'''
with open('/SSD/lz/time_tsm/ts2vec_cls/scripts/ts2vec_tsm_uea.sh', 'a') as f:
f.write('python train_tsm_uea.py '
'--dataroot /SSD/lz/Multivariate2018_arff '
'--dataset ' + dataset
+ ' --gpu 1 --batch-size 8 ' +
' --save_csv_name ts2vec_tsm_uea_0423_' + ';\n')
## nohup ./scripts/ts2vec_tsm_uea.sh &
================================================
FILE: ts_classification_methods/ts2vec_cls/scripts/ts2vec_fcn_set_norm.sh
================================================
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ACSF1 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Adiac --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset AllGestureWiimoteX --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset AllGestureWiimoteY --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset AllGestureWiimoteZ --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ArrowHead --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset BME --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Beef --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset BeetleFly --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset BirdChicken --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CBF --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Car --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Chinatown --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ChlorineConcentration --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CinCECGTorso --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Coffee --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Computers --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CricketX --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CricketY --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CricketZ --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Crop --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DiatomSizeReduction --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DistalPhalanxOutlineAgeGroup --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DistalPhalanxOutlineCorrect --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DistalPhalanxTW --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DodgerLoopDay --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DodgerLoopGame --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DodgerLoopWeekend --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ECG200 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ECG5000 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ECGFiveDays --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset EOGHorizontalSignal --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset EOGVerticalSignal --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Earthquakes --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ElectricDevices --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset EthanolLevel --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FaceAll --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FaceFour --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FacesUCR --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FiftyWords --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Fish --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FordA --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FordB --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FreezerRegularTrain --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FreezerSmallTrain --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Fungi --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GestureMidAirD1 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GestureMidAirD2 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GestureMidAirD3 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GesturePebbleZ1 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GesturePebbleZ2 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GunPoint --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GunPointAgeSpan --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GunPointMaleVersusFemale --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GunPointOldVersusYoung --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Ham --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset HandOutlines --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Haptics --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Herring --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset HouseTwenty --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset InlineSkate --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset InsectEPGRegularTrain --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset InsectEPGSmallTrain --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset InsectWingbeatSound --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ItalyPowerDemand --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset LargeKitchenAppliances --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Lightning2 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Lightning7 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Mallat --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Meat --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MedicalImages --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MelbournePedestrian --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MiddlePhalanxOutlineAgeGroup --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MiddlePhalanxOutlineCorrect --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MiddlePhalanxTW --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MixedShapesRegularTrain --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MixedShapesSmallTrain --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MoteStrain --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset NonInvasiveFetalECGThorax1 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset NonInvasiveFetalECGThorax2 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset OSULeaf --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset OliveOil --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PLAID --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PhalangesOutlinesCorrect --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Phoneme --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PickupGestureWiimoteZ --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PigAirwayPressure --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PigArtPressure --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PigCVP --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Plane --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PowerCons --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ProximalPhalanxOutlineAgeGroup --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ProximalPhalanxOutlineCorrect --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ProximalPhalanxTW --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset RefrigerationDevices --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Rock --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ScreenType --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SemgHandGenderCh2 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SemgHandMovementCh2 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SemgHandSubjectCh2 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ShakeGestureWiimoteZ --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ShapeletSim --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ShapesAll --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SmallKitchenAppliances --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SmoothSubspace --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SonyAIBORobotSurface1 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SonyAIBORobotSurface2 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset StarLightCurves --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Strawberry --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SwedishLeaf --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Symbols --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SyntheticControl --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ToeSegmentation1 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ToeSegmentation2 --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Trace --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset TwoLeadECG --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset TwoPatterns --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UMD --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UWaveGestureLibraryAll --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UWaveGestureLibraryX --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UWaveGestureLibraryY --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UWaveGestureLibraryZ --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Wafer --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Wine --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset WordSynonyms --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Worms --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset WormsTwoClass --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
python train_fcn.py --backbone fcn --classifier nonlinear --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Yoga --fcn_epoch 1000 --gpu 1 --batch-size 8 --loss cross_entropy --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1;
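# The per-dataset commands above differ only in --dataset. A minimal
# loop-based equivalent, kept commented out so this script still runs the
# explicit commands; it assumes a dataset list file (hypothetical name:
# ucr_datasets.txt, one UCR dataset name per line):
#
#   while read -r ds; do
#       python train_fcn.py --backbone fcn --classifier nonlinear \
#           --classifier_input 128 --dataroot /SSD/lz/UCRArchive_2018 \
#           --normalize_way train_set --dataset "$ds" --fcn_epoch 1000 \
#           --gpu 1 --batch-size 8 --loss cross_entropy \
#           --save_csv_name ts2vec_fcn_set_norm_0404_ --cuda cuda:1
#   done < ucr_datasets.txt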
================================================
FILE: ts_classification_methods/ts2vec_cls/scripts/ts2vec_fcn_single_norm.sh
================================================
================================================
FILE: ts_classification_methods/ts2vec_cls/scripts/ts2vec_tsm_set_norm.sh
================================================
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ACSF1 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Adiac --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset AllGestureWiimoteX --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset AllGestureWiimoteY --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset AllGestureWiimoteZ --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ArrowHead --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset BME --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Beef --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset BeetleFly --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset BirdChicken --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CBF --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Car --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Chinatown --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ChlorineConcentration --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CinCECGTorso --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Coffee --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Computers --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CricketX --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CricketY --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset CricketZ --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Crop --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DiatomSizeReduction --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DistalPhalanxOutlineAgeGroup --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DistalPhalanxOutlineCorrect --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DistalPhalanxTW --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DodgerLoopDay --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DodgerLoopGame --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset DodgerLoopWeekend --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ECG200 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ECG5000 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ECGFiveDays --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset EOGHorizontalSignal --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset EOGVerticalSignal --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Earthquakes --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ElectricDevices --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset EthanolLevel --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FaceAll --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FaceFour --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FacesUCR --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FiftyWords --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Fish --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FordA --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FordB --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FreezerRegularTrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset FreezerSmallTrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Fungi --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GestureMidAirD1 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GestureMidAirD2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GestureMidAirD3 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GesturePebbleZ1 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GesturePebbleZ2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GunPoint --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GunPointAgeSpan --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GunPointMaleVersusFemale --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset GunPointOldVersusYoung --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Ham --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset HandOutlines --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Haptics --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Herring --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset HouseTwenty --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset InlineSkate --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset InsectEPGRegularTrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset InsectEPGSmallTrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset InsectWingbeatSound --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ItalyPowerDemand --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset LargeKitchenAppliances --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Lightning2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Lightning7 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Mallat --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Meat --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MedicalImages --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MelbournePedestrian --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MiddlePhalanxOutlineAgeGroup --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MiddlePhalanxOutlineCorrect --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MiddlePhalanxTW --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MixedShapesRegularTrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MixedShapesSmallTrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset MoteStrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset NonInvasiveFetalECGThorax1 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset NonInvasiveFetalECGThorax2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset OSULeaf --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset OliveOil --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PLAID --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PhalangesOutlinesCorrect --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Phoneme --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PickupGestureWiimoteZ --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PigAirwayPressure --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PigArtPressure --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PigCVP --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Plane --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset PowerCons --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ProximalPhalanxOutlineAgeGroup --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ProximalPhalanxOutlineCorrect --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ProximalPhalanxTW --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset RefrigerationDevices --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Rock --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ScreenType --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SemgHandGenderCh2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SemgHandMovementCh2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SemgHandSubjectCh2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ShakeGestureWiimoteZ --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ShapeletSim --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ShapesAll --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SmallKitchenAppliances --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SmoothSubspace --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SonyAIBORobotSurface1 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SonyAIBORobotSurface2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset StarLightCurves --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Strawberry --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SwedishLeaf --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Symbols --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset SyntheticControl --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ToeSegmentation1 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset ToeSegmentation2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Trace --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset TwoLeadECG --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset TwoPatterns --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UMD --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UWaveGestureLibraryAll --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UWaveGestureLibraryX --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UWaveGestureLibraryY --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset UWaveGestureLibraryZ --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Wafer --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Wine --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset WordSynonyms --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Worms --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset WormsTwoClass --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way train_set --dataset Yoga --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_set_norm_0404_;
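# As with the FCN sweep, the train_tsm.py commands above vary only in
# --dataset. A minimal commented-out sketch of the same sweep as a loop,
# assuming the same hypothetical ucr_datasets.txt list:
#
#   while read -r ds; do
#       python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 \
#           --normalize_way train_set --dataset "$ds" --gpu 1 --batch-size 8 \
#           --save_csv_name ts2vec_tsm_set_norm_0404_
#   done < ucr_datasets.txt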
================================================
FILE: ts_classification_methods/ts2vec_cls/scripts/ts2vec_tsm_single_norm.sh
================================================
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ACSF1 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Adiac --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteX --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteY --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteZ --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ArrowHead --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BME --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CBF --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Chinatown --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ChlorineConcentration --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketX --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketY --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketZ --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Crop --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DiatomSizeReduction --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineCorrect --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopDay --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopGame --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopWeekend --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG200 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG5000 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECGFiveDays --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ElectricDevices --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EthanolLevel --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FiftyWords --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fish --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordA --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordB --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fungi --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD1 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD3 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ1 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPoint --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointAgeSpan --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointMaleVersusFemale --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointOldVersusYoung --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Ham --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HouseTwenty --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGRegularTrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGSmallTrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ItalyPowerDemand --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset LargeKitchenAppliances --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MedicalImages --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineCorrect --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax1 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OSULeaf --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PLAID --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigAirwayPressure --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigArtPressure --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigCVP --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Plane --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PowerCons --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineCorrect --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset RefrigerationDevices --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ScreenType --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandGenderCh2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandMovementCh2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandSubjectCh2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapeletSim --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapesAll --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmoothSubspace --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset StarLightCurves --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Strawberry --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SwedishLeaf --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Symbols --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation1 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Trace --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoLeadECG --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoPatterns --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UMD --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryAll --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryX --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryY --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryZ --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wafer --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WordSynonyms --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Worms --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WormsTwoClass --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_train_val_b8_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ACSF1 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Adiac --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteX --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteY --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset AllGestureWiimoteZ --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ArrowHead --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BME --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Beef --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BeetleFly --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset BirdChicken --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CBF --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Car --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Chinatown --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ChlorineConcentration --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CinCECGTorso --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Coffee --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Computers --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketX --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketY --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset CricketZ --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Crop --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DiatomSizeReduction --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineAgeGroup --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxOutlineCorrect --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DistalPhalanxTW --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopDay --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopGame --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset DodgerLoopWeekend --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG200 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECG5000 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ECGFiveDays --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGHorizontalSignal --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EOGVerticalSignal --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Earthquakes --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ElectricDevices --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset EthanolLevel --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceAll --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FaceFour --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FacesUCR --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FiftyWords --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fish --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordA --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FordB --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerRegularTrain --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset FreezerSmallTrain --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Fungi --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD1 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD2 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GestureMidAirD3 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ1 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GesturePebbleZ2 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPoint --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointAgeSpan --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointMaleVersusFemale --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset GunPointOldVersusYoung --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Ham --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HandOutlines --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Haptics --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Herring --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset HouseTwenty --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InlineSkate --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGRegularTrain --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectEPGSmallTrain --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset InsectWingbeatSound --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ItalyPowerDemand --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset LargeKitchenAppliances --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning2 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Lightning7 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Mallat --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Meat --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MedicalImages --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MelbournePedestrian --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineAgeGroup --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxOutlineCorrect --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MiddlePhalanxTW --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesRegularTrain --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MixedShapesSmallTrain --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset MoteStrain --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax1 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset NonInvasiveFetalECGThorax2 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OSULeaf --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset OliveOil --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PLAID --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PhalangesOutlinesCorrect --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Phoneme --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PickupGestureWiimoteZ --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigAirwayPressure --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigArtPressure --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PigCVP --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Plane --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset PowerCons --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineAgeGroup --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxOutlineCorrect --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ProximalPhalanxTW --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset RefrigerationDevices --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Rock --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ScreenType --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandGenderCh2 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandMovementCh2 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SemgHandSubjectCh2 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShakeGestureWiimoteZ --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapeletSim --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ShapesAll --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmallKitchenAppliances --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SmoothSubspace --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface1 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SonyAIBORobotSurface2 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset StarLightCurves --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Strawberry --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SwedishLeaf --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Symbols --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset SyntheticControl --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation1 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset ToeSegmentation2 --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Trace --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoLeadECG --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset TwoPatterns --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UMD --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryAll --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryX --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryY --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset UWaveGestureLibraryZ --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wafer --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Wine --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WordSynonyms --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Worms --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset WormsTwoClass --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
python train_tsm.py --dataroot /SSD/lz/UCRArchive_2018 --normalize_way single --dataset Yoga --gpu 1 --batch-size 16 --save_csv_name ts2vec_tsm_train_val_b16_single_norm_0409_;
================================================
FILE: ts_classification_methods/ts2vec_cls/scripts/ts2vec_tsm_uea.sh
================================================
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset ArticularyWordRecognition --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset AtrialFibrillation --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset BasicMotions --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset CharacterTrajectories --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset Cricket --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset DuckDuckGeese --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset EigenWorms --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset Epilepsy --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset EthanolConcentration --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset ERing --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset FaceDetection --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset FingerMovements --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset HandMovementDirection --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset Handwriting --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset Heartbeat --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset InsectWingbeat --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset JapaneseVowels --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset Libras --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset LSST --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset MotorImagery --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset NATOPS --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset PenDigits --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset PEMS-SF --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset PhonemeSpectra --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset RacketSports --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset SelfRegulationSCP1 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset SelfRegulationSCP2 --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset SpokenArabicDigits --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset StandWalkJump --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
python train_tsm_uea.py --dataroot /SSD/lz/Multivariate2018_arff --dataset UWaveGestureLibrary --gpu 1 --batch-size 8 --save_csv_name ts2vec_tsm_uea_0423_;
================================================
FILE: ts_classification_methods/ts2vec_cls/tasks/__init__.py
================================================
from .classification import eval_classification
================================================
FILE: ts_classification_methods/ts2vec_cls/tasks/_eval_protocols.py
================================================
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV, train_test_split

def fit_svm(features, y, MAX_SAMPLES=10000):
    # For tiny training sets (fewer than 5 samples per class, or fewer than 50
    # samples overall) skip cross-validation and fit a single unregularized SVC.
    nb_classes = np.unique(y, return_counts=True)[1].shape[0]
    train_size = features.shape[0]
    svm = SVC(C=np.inf, gamma='scale')
    if train_size // nb_classes < 5 or train_size < 50:
        return svm.fit(features, y)
    else:
        grid_search = GridSearchCV(
            svm, {
                'C': [
                    0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000,
                    np.inf
                ],
                'kernel': ['rbf'],
                'degree': [3],
                'gamma': ['scale'],
                'coef0': [0],
                'shrinking': [True],
                'probability': [False],
                'tol': [0.001],
                'cache_size': [200],
                'class_weight': [None],
                'verbose': [False],
                'max_iter': [10000000],
                'decision_function_shape': ['ovr'],
                'random_state': [None]
            },
            cv=5, n_jobs=5
        )
        # If the training set is too large, subsample MAX_SAMPLES examples
        if train_size > MAX_SAMPLES:
            split = train_test_split(
                features, y,
                train_size=MAX_SAMPLES, random_state=0, stratify=y
            )
            features = split[0]
            y = split[2]
        grid_search.fit(features, y)
        return grid_search.best_estimator_

def fit_lr(features, y, MAX_SAMPLES=100000):
    # If the training set is too large, subsample MAX_SAMPLES examples
    if features.shape[0] > MAX_SAMPLES:
        split = train_test_split(
            features, y,
            train_size=MAX_SAMPLES, random_state=0, stratify=y
        )
        features = split[0]
        y = split[2]
    pipe = make_pipeline(
        StandardScaler(),
        LogisticRegression(
            random_state=0,
            max_iter=1000000,
            multi_class='ovr'
        )
    )
    pipe.fit(features, y)
    return pipe

def fit_knn(features, y):
    pipe = make_pipeline(
        StandardScaler(),
        KNeighborsClassifier(n_neighbors=1)
    )
    pipe.fit(features, y)
    return pipe

def fit_ridge(train_features, train_y, valid_features, valid_y, MAX_SAMPLES=100000):
    # If the training set is too large, subsample MAX_SAMPLES examples
    if train_features.shape[0] > MAX_SAMPLES:
        split = train_test_split(
            train_features, train_y,
            train_size=MAX_SAMPLES, random_state=0
        )
        train_features = split[0]
        train_y = split[2]
    if valid_features.shape[0] > MAX_SAMPLES:
        split = train_test_split(
            valid_features, valid_y,
            train_size=MAX_SAMPLES, random_state=0
        )
        valid_features = split[0]
        valid_y = split[2]

    alphas = [0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]
    valid_results = []
    for alpha in alphas:
        lr = Ridge(alpha=alpha).fit(train_features, train_y)
        valid_pred = lr.predict(valid_features)
        # validation criterion: RMSE + MAE on the held-out split
        score = np.sqrt(((valid_pred - valid_y) ** 2).mean()) + np.abs(valid_pred - valid_y).mean()
        valid_results.append(score)
    best_alpha = alphas[np.argmin(valid_results)]
    lr = Ridge(alpha=best_alpha)
    lr.fit(train_features, train_y)
    return lr
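
# Illustrative usage sketch (hypothetical data): each protocol above expects a
# feature matrix of shape (n_samples, repr_dims), for example TS2Vec
# representations, plus integer labels; the random arrays below only exercise
# the API and carry no signal.
if __name__ == '__main__':
    rng = np.random.RandomState(0)
    feats = rng.randn(80, 320)
    labels = rng.randint(0, 4, size=80)
    knn = fit_knn(feats, labels)  # 1-NN on standardized features
    lr = fit_lr(feats, labels)    # logistic-regression protocol
    print('1-NN train acc:', knn.score(feats, labels))
    print('LR train acc:', lr.score(feats, labels))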
================================================
FILE: ts_classification_methods/ts2vec_cls/tasks/classification.py
================================================
import numpy as np
from . import _eval_protocols as eval_protocols
from sklearn.preprocessing import label_binarize
from sklearn.metrics import average_precision_score

def eval_classification(model, train_data, train_labels, test_data, test_labels, eval_protocol='linear'):
    assert train_labels.ndim == 1 or train_labels.ndim == 2
    train_repr = model.encode(train_data, encoding_window='full_series' if train_labels.ndim == 1 else None)
    test_repr = model.encode(test_data, encoding_window='full_series' if train_labels.ndim == 1 else None)

    if eval_protocol == 'linear':
        fit_clf = eval_protocols.fit_lr
    elif eval_protocol == 'svm':
        fit_clf = eval_protocols.fit_svm
    elif eval_protocol == 'knn':
        fit_clf = eval_protocols.fit_knn
    else:
        assert False, 'unknown evaluation protocol'

    def merge_dim01(array):
        # Flatten the first two axes: (N, T, ...) -> (N*T, ...)
        return array.reshape(array.shape[0] * array.shape[1], *array.shape[2:])

    if train_labels.ndim == 2:
        train_repr = merge_dim01(train_repr)
        train_labels = merge_dim01(train_labels)
        test_repr = merge_dim01(test_repr)
        test_labels = merge_dim01(test_labels)

    clf = fit_clf(train_repr, train_labels)
    acc = clf.score(test_repr, test_labels)
    train_acc = clf.score(train_repr, train_labels)
    if eval_protocol == 'svm':
        y_score = clf.decision_function(test_repr)
    else:
        # Both the logistic-regression and 1-NN pipelines expose predict_proba;
        # KNeighborsClassifier has no decision_function, so probabilities are
        # used for every non-SVM protocol.
        y_score = clf.predict_proba(test_repr)
    test_labels_onehot = label_binarize(test_labels, classes=np.arange(train_labels.max() + 1))
    auprc = average_precision_score(test_labels_onehot, y_score)
    return y_score, {'acc': acc, 'auprc': auprc, 'train_acc': train_acc}
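
# Illustrative sketch (hypothetical stub): eval_classification only needs an
# object exposing an ``encode`` method, so an identity encoder is enough to
# exercise the protocol end to end. Because of the relative imports, run it as
# a module, e.g. ``python -m ts2vec_cls.tasks.classification``.
if __name__ == '__main__':
    class _IdentityEncoder:
        def encode(self, data, encoding_window=None):
            # stand-in for a trained TS2Vec model: flatten each series
            return data.reshape(data.shape[0], -1)

    rng = np.random.RandomState(0)
    train_x, test_x = rng.randn(60, 50, 1), rng.randn(30, 50, 1)
    train_y, test_y = rng.randint(0, 3, size=60), rng.randint(0, 3, size=30)
    _, res = eval_classification(_IdentityEncoder(), train_x, train_y,
                                 test_x, test_y, eval_protocol='linear')
    print(res)  # {'acc': ..., 'auprc': ..., 'train_acc': ...}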
================================================
FILE: ts_classification_methods/ts2vec_cls/train.py
================================================
import torch
import numpy as np
import argparse
import os
import sys
import time
import datetime
from ts2vec.ts2vec import TS2Vec
from ts2vec import datautils, tasks
from ts2vec.utils import init_dl_program, name_with_datetime, pkl_save, data_dropout

def save_checkpoint_callback(
        save_every=1,
        unit='epoch'
):
    assert unit in ('epoch', 'iter')

    def callback(model, loss):
        n = model.n_epochs if unit == 'epoch' else model.n_iters
        if n % save_every == 0:
            model.save(f'{run_dir}/model_{n}.pkl')

    return callback

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('dataset', help='The dataset name')
    parser.add_argument('run_name', help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
    parser.add_argument('--loader', type=str, required=True, help='The data loader used to load the experimental data. This can be set to UCR, UEA, forecast_csv, forecast_csv_univar, forecast_npy, forecast_npy_univar, anomaly, or anomaly_coldstart')
    parser.add_argument('--gpu', type=int, default=0, help='The gpu no. used for training and inference (defaults to 0)')
    parser.add_argument('--batch-size', type=int, default=8, help='The batch size (defaults to 8)')
    parser.add_argument('--lr', type=float, default=0.001, help='The learning rate (defaults to 0.001)')
    parser.add_argument('--repr-dims', type=int, default=320, help='The representation dimension (defaults to 320)')
    parser.add_argument('--max-train-length', type=int, default=3000, help='Sequences longer than this value are cropped into subsequences, each no longer than this value (defaults to 3000)')
    parser.add_argument('--iters', type=int, default=None, help='The number of iterations')
    parser.add_argument('--epochs', type=int, default=None, help='The number of epochs')
    parser.add_argument('--save-every', type=int, default=None, help='Save the checkpoint every <save-every> iterations/epochs')
    parser.add_argument('--seed', type=int, default=None, help='The random seed')
    parser.add_argument('--max-threads', type=int, default=None, help='The maximum allowed number of threads used by this process')
    parser.add_argument('--eval', action="store_true", help='Whether to perform evaluation after training')
    parser.add_argument('--irregular', type=float, default=0, help='The ratio of missing observations (defaults to 0)')
    args = parser.parse_args()

    print("Dataset:", args.dataset)
    print("Arguments:", str(args))

    device = init_dl_program(args.gpu, seed=args.seed, max_threads=args.max_threads)

    print('Loading data... ', end='')
    if args.loader == 'UCR':
        task_type = 'classification'
        train_data, train_labels, test_data, test_labels = datautils.load_UCR(args.dataset)
    elif args.loader == 'UEA':
        task_type = 'classification'
        train_data, train_labels, test_data, test_labels = datautils.load_UEA(args.dataset)
    elif args.loader == 'forecast_csv':
        task_type = 'forecasting'
        data, train_slice, valid_slice, test_slice, scaler, pred_lens, n_covariate_cols = datautils.load_forecast_csv(args.dataset)
        train_data = data[:, train_slice]
    elif args.loader == 'forecast_csv_univar':
        task_type = 'forecasting'
        data, train_slice, valid_slice, test_slice, scaler, pred_lens, n_covariate_cols = datautils.load_forecast_csv(args.dataset, univar=True)
        train_data = data[:, train_slice]
    elif args.loader == 'forecast_npy':
        task_type = 'forecasting'
        data, train_slice, valid_slice, test_slice, scaler, pred_lens, n_covariate_cols = datautils.load_forecast_npy(args.dataset)
        train_data = data[:, train_slice]
    elif args.loader == 'forecast_npy_univar':
        task_type = 'forecasting'
        data, train_slice, valid_slice, test_slice, scaler, pred_lens, n_covariate_cols = datautils.load_forecast_npy(args.dataset, univar=True)
        train_data = data[:, train_slice]
    elif args.loader == 'anomaly':
        task_type = 'anomaly_detection'
        all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
        train_data = datautils.gen_ano_train_data(all_train_data)
    elif args.loader == 'anomaly_coldstart':
        task_type = 'anomaly_detection_coldstart'
        all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay = datautils.load_anomaly(args.dataset)
        train_data, _, _, _ = datautils.load_UCR('FordA')
    else:
        raise ValueError(f"Unknown loader {args.loader}.")

    if args.irregular > 0:
        if task_type == 'classification':
            train_data = data_dropout(train_data, args.irregular)
            test_data = data_dropout(test_data, args.irregular)
        else:
            raise ValueError(f"Task type {task_type} is not supported when irregular>0.")
    print('done')
    config = dict(
        batch_size=args.batch_size,
        lr=args.lr,
        output_dims=args.repr_dims,
        max_train_length=args.max_train_length
    )

    if args.save_every is not None:
        unit = 'epoch' if args.epochs is not None else 'iter'
        config[f'after_{unit}_callback'] = save_checkpoint_callback(args.save_every, unit)

    run_dir = 'training/' + args.dataset + '__' + name_with_datetime(args.run_name)
    os.makedirs(run_dir, exist_ok=True)

    t = time.time()
    model = TS2Vec(
        input_dims=train_data.shape[-1],
        device=device,
        **config
    )
    loss_log = model.fit(
        train_data,
        n_epochs=args.epochs,
        n_iters=args.iters,
        verbose=True
    )
    model.save(f'{run_dir}/model.pkl')
    t = time.time() - t
    print(f"\nTraining time: {datetime.timedelta(seconds=t)}\n")

    if args.eval:
        if task_type == 'classification':
            out, eval_res = tasks.eval_classification(model, train_data, train_labels, test_data, test_labels, eval_protocol='svm')
        elif task_type == 'forecasting':
            out, eval_res = tasks.eval_forecasting(model, data, train_slice, valid_slice, test_slice, scaler, pred_lens, n_covariate_cols)
        elif task_type == 'anomaly_detection':
            out, eval_res = tasks.eval_anomaly_detection(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay)
        elif task_type == 'anomaly_detection_coldstart':
            out, eval_res = tasks.eval_anomaly_detection_coldstart(model, all_train_data, all_train_labels, all_train_timestamps, all_test_data, all_test_labels, all_test_timestamps, delay)
        else:
            assert False
        pkl_save(f'{run_dir}/out.pkl', out)
        pkl_save(f'{run_dir}/eval_res.pkl', eval_res)
        print('Evaluation result:', eval_res)

    print("Finished.")
================================================
FILE: ts_classification_methods/ts2vec_cls/train_fcn.py
================================================
import argparse
import datetime
import os
import sys
import time
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import numpy as np
import torch
from torch.utils.data import DataLoader
from data.dataloader import UCRDataset
from data.preprocessing import normalize_per_series, fill_nan_value, normalize_train_val_test
from ts2vec_cls.ts2vec import TS2Vec
from ts2vec_cls.utils import init_dl_program, name_with_datetime
from tsm_utils import build_dataset, build_model, build_loss, evaluate, save_cls_result, get_all_datasets, set_seed

def save_checkpoint_callback(
        save_every=1,
        unit='epoch'
):
    assert unit in ('epoch', 'iter')

    def callback(model, loss):
        n = model.n_epochs if unit == 'epoch' else model.n_iters
        if n % save_every == 0:
            model.save(f'{run_dir}/model_{n}.pkl')

    return callback
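
# Pipeline overview (descriptive note): this script first fits a TS2Vec encoder
# on each training fold, then trains an FCN backbone plus classifier head on
# the frozen encoder representations, early-stopping on validation loss and
# keeping the test accuracy from the best-validation epoch.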

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--dataset', type=str, default='Coffee', help='The dataset name')
    parser.add_argument('--dataroot', type=str, default='/SSD/lz/UCRArchive_2018',
                        help='path of UCR folder')  ## '/SSD/lz/UCRArchive_2018', None
    parser.add_argument('--num_classes', type=int, default=0, help='number of classes')
    parser.add_argument('--task', type=str, default='classification', help='classification or reconstruction')
    parser.add_argument('--classifier_input', type=int, default=128, help='input dim of the classifier')
    parser.add_argument('--loss', type=str, default='cross_entropy', help='loss function')
    parser.add_argument('--weight_decay', type=float, default=0.0, help='weight decay')
    parser.add_argument('--backbone', type=str, default='fcn', help='encoder backbone, fcn or dilated')
    parser.add_argument('--classifier', type=str, default='nonlinear', help='type of classifier (linear or nonlinear)')
    parser.add_argument('--run_name', default='UCR',
                        help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
    parser.add_argument('--loader', type=str, default='UCR',
                        help='The data loader used to load the experimental data. This can be set to UCR, UEA, '
                             'forecast_csv, forecast_csv_univar, anomaly, or anomaly_coldstart')
    parser.add_argument('--gpu', type=int, default=1,
                        help='The gpu no. used for training and inference (defaults to 0)')
    parser.add_argument('--batch-size', type=int, default=8, help='The batch size (defaults to 8)')
    parser.add_argument('--fcn_batch_size', type=int, default=128,
                        help='FCN batch size; larger values (e.g. 128) suit big datasets, smaller ones (e.g. 16) small datasets')
    parser.add_argument('--lr', type=float, default=0.001, help='The learning rate (defaults to 0.001)')
    parser.add_argument('--repr-dims', type=int, default=320, help='The representation dimension (defaults to 320)')
    parser.add_argument('--max-train-length', type=int, default=3000,
                        help='Sequences longer than this value are cropped into subsequences, '
                             'each no longer than this value (defaults to 3000)')
    parser.add_argument('--iters', type=int, default=None, help='The number of iterations')
    parser.add_argument('--epochs', type=int, default=None, help='The number of epochs')
    parser.add_argument('--fcn_epoch', type=int, default=1000, help='fcn training epochs')
    parser.add_argument('--save-every', type=int, default=None,
                        help='Save the checkpoint every <save-every> iterations/epochs')
    parser.add_argument('--seed', type=int, default=42, help='The random seed')
    parser.add_argument('--random_seed', type=int, default=42, help='The random seed')
    parser.add_argument('--max-threads', type=int, default=8,
                        help='The maximum allowed number of threads used by this process')
    parser.add_argument('--eval', action="store_true", default=True,
                        help='Whether to perform evaluation after training')
    parser.add_argument('--irregular', type=float, default=0, help='The ratio of missing observations (defaults to 0)')
    parser.add_argument('--normalize_way', type=str, default='single', help='single or train_set')
    parser.add_argument('--save_csv_name', type=str, default='ts2vec_test_fcncls_0404_')
    parser.add_argument('--save_dir', type=str, default='/SSD/lz/time_tsm/ts2vec_cls/result')
    parser.add_argument('--cuda', type=str, default='cuda:1')
    args = parser.parse_args()
    set_seed(args)
    print("Dataset:", args.dataset)
    print("Arguments:", str(args))

    device = init_dl_program(args.gpu, seed=args.seed, max_threads=args.max_threads)
    device_fcn = torch.device(args.cuda if torch.cuda.is_available() else "cpu")

    print('Loading data... ', end='')
    config = dict(
        batch_size=args.batch_size,
        lr=args.lr,
        output_dims=args.repr_dims,
        max_train_length=args.max_train_length
    )
    if args.save_every is not None:
        unit = 'epoch' if args.epochs is not None else 'iter'
        config[f'after_{unit}_callback'] = save_checkpoint_callback(args.save_every, unit)

    run_dir = 'training/' + args.dataset + '__' + name_with_datetime(args.run_name)
    os.makedirs(run_dir, exist_ok=True)

    sum_dataset, sum_target, num_classes = build_dataset(args)
    args.num_classes = num_classes
    if sum_dataset.shape[0] < args.fcn_batch_size:
        args.fcn_batch_size = 16
    train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = get_all_datasets(
        sum_dataset, sum_target)

    test_accuracies = []
    end_val_epochs = []
    train_time = 0.0
    for i, train_dataset in enumerate(train_datasets):
        print("\nStart K_fold = ", i)
        train_labels = train_targets[i]
        val_dataset = val_datasets[i]
        val_labels = val_targets[i]
        test_dataset = test_datasets[i]
        test_labels = test_targets[i]

        # mean impute for missing values in dataset
        train_dataset, val_dataset, test_dataset = fill_nan_value(train_dataset, val_dataset, test_dataset)
        if args.normalize_way == 'single':
            # TODO normalize per series
            train_dataset = normalize_per_series(train_dataset)
            val_dataset = normalize_per_series(val_dataset)
            test_dataset = normalize_per_series(test_dataset)
        else:
            train_dataset, val_dataset, test_dataset = normalize_train_val_test(train_dataset, val_dataset,
                                                                                test_dataset)

        train_dataset = train_dataset[..., np.newaxis]
        val_dataset = val_dataset[..., np.newaxis]
        test_dataset = test_dataset[..., np.newaxis]
        print("train_data.shape = ", train_dataset.shape)

        t = time.time()
        model = TS2Vec(
            input_dims=train_dataset.shape[-1],
            device=device,
            **config
        )
        loss_log = model.fit(
            train_dataset,
            n_epochs=args.epochs,
            n_iters=args.iters,
            verbose=True
        )
        # model.save(f'{run_dir}/model.pkl')

        # Encode each split with the trained TS2Vec encoder
        train_repr = model.encode(train_dataset, encoding_window='full_series' if train_labels.ndim == 1 else None)
        val_repr = model.encode(val_dataset, encoding_window='full_series' if val_labels.ndim == 1 else None)
        test_repr = model.encode(test_dataset, encoding_window='full_series' if test_labels.ndim == 1 else None)
        # accu = test_accu.cpu().numpy()
        print("data info = ", train_repr.shape, test_repr.shape, train_dataset.shape, test_dataset.shape)
        # print(type(train_repr), train_repr[:2])

        # Train an FCN + classifier head on the frozen representations
        model_fcn, classifier = build_model(args)
        model_fcn, classifier = model_fcn.to(device_fcn), classifier.to(device_fcn)
        loss = build_loss(args).to(device_fcn)
        optimizer = torch.optim.Adam([{'params': model_fcn.parameters()}, {'params': classifier.parameters()}],
                                     lr=args.lr, weight_decay=args.weight_decay)

        train_set = UCRDataset(torch.from_numpy(train_repr).to(device_fcn),
                               torch.from_numpy(train_labels).to(device_fcn).to(torch.int64))
        val_set = UCRDataset(torch.from_numpy(val_repr).to(device_fcn),
                             torch.from_numpy(val_labels).to(device_fcn).to(torch.int64))
        test_set = UCRDataset(torch.from_numpy(test_repr).to(device_fcn),
                              torch.from_numpy(test_labels).to(device_fcn).to(torch.int64))
        train_loader = DataLoader(train_set, batch_size=args.fcn_batch_size, num_workers=0, drop_last=True)
        val_loader = DataLoader(val_set, batch_size=args.fcn_batch_size, num_workers=0)
        test_loader = DataLoader(test_set, batch_size=args.fcn_batch_size, num_workers=0)

        train_loss = []
        train_accuracy = []
        last_loss = float('inf')
        stop_count = 0
        increase_count = 0
        test_accuracy = 0
        min_val_loss = float('inf')
        end_val_epoch = 0
        num_steps = len(train_set) // args.batch_size
        for epoch in range(args.fcn_epoch):
            # early stopping in finetune
            if stop_count == 50 or increase_count == 50:
                print('model convergent at epoch {}, early stopping'.format(epoch))
                break

            epoch_train_loss = 0
            epoch_train_acc = 0
            model_fcn.train()
            classifier.train()
            for x, y in train_loader:
                optimizer.zero_grad()
                pred = model_fcn(x)
                pred = classifier(pred)
                step_loss = loss(pred, y)
                step_loss.backward()
                optimizer.step()
                epoch_train_loss += step_loss.item()
                epoch_train_acc += torch.sum(torch.argmax(pred.data, axis=1) == y) / len(y)
            epoch_train_loss /= num_steps
            epoch_train_acc /= num_steps

            model_fcn.eval()
            classifier.eval()
            val_loss, val_accu = evaluate(val_loader, model_fcn, classifier, loss, device_fcn)
            if min_val_loss > val_loss:
                # keep the test accuracy measured at the best-validation epoch
                min_val_loss = val_loss
                end_val_epoch = epoch
                test_loss, test_accuracy = evaluate(test_loader, model_fcn, classifier, loss, device_fcn)

            if epoch % 100 == 0:
                print(
                    "epoch : {}, train loss: {} , train accuracy : {}, \nval loss : {}, val accuracy : {}, \ntest loss : {}, test accuracy : {}".format(
                        epoch, epoch_train_loss, epoch_train_acc, val_loss, val_accu, test_loss, test_accuracy))

            if abs(last_loss - val_loss) <= 1e-4:
                stop_count += 1
            else:
                stop_count = 0
            if val_loss > last_loss:
                increase_count += 1
            else:
                increase_count = 0
            last_loss = val_loss
        # out, eval_res = tasks.eval_classification(model, train_dataset, train_labels, test_dataset, test_labels,
        #                                           eval_protocol='svm')
        t = time.time() - t
        print(f"\nTraining time: {datetime.timedelta(seconds=t)}\n")
        train_time += t
        # print('Evaluation result:', eval_res)
        # test_accuracies.append(eval_res['acc'])
        test_accuracies.append(test_accuracy)
        end_val_epochs.append(end_val_epoch)

    test_accuracies = torch.Tensor(test_accuracies)
    end_val_epochs = np.array(end_val_epochs)
    # averaged over the folds; the division by 5 assumes the 5-fold split from get_all_datasets
    save_cls_result(args, test_accu=torch.mean(test_accuracies), test_std=torch.std(test_accuracies),
                    train_time=train_time / 5, end_val_epoch=np.mean(end_val_epochs))
    print("Finished.")
================================================
FILE: ts_classification_methods/ts2vec_cls/train_tsm.py
================================================
import argparse
import datetime
import os
import sys
import time
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import numpy as np
import torch
from data.preprocessing import normalize_per_series, fill_nan_value, normalize_train_val_test
from ts2vec_cls import tasks
from ts2vec_cls.ts2vec import TS2Vec
from ts2vec_cls.utils import init_dl_program, name_with_datetime
from tsm_utils import build_dataset, save_cls_result, get_all_datasets, set_seed

def save_checkpoint_callback(
        save_every=1,
        unit='epoch'
):
    assert unit in ('epoch', 'iter')

    def callback(model, loss):
        n = model.n_epochs if unit == 'epoch' else model.n_iters
        if n % save_every == 0:
            model.save(f'{run_dir}/model_{n}.pkl')

    return callback
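
# Protocol note (descriptive): per fold, TS2Vec is fitted on the training split
# only; evaluation then trains an SVM on the concatenated train+validation
# representations and scores it on the test split
# (tasks.eval_classification with eval_protocol='svm').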

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--dataset', type=str, default='Coffee', help='The dataset name')
    parser.add_argument('--dataroot', type=str, default='/SSD/lz/UCRArchive_2018',
                        help='path of UCR folder')  ## '/SSD/lz/UCRArchive_2018', None
    parser.add_argument('--run_name', default='UCR',
                        help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
    parser.add_argument('--loader', type=str, default='UCR',
                        help='The data loader used to load the experimental data. This can be set to UCR, UEA, '
                             'forecast_csv, forecast_csv_univar, anomaly, or anomaly_coldstart')
    parser.add_argument('--gpu', type=int, default=1,
                        help='The gpu no. used for training and inference (defaults to 0)')
    parser.add_argument('--batch-size', type=int, default=8, help='The batch size (defaults to 8)')
    parser.add_argument('--lr', type=float, default=0.001, help='The learning rate (defaults to 0.001)')
    parser.add_argument('--repr-dims', type=int, default=320, help='The representation dimension (defaults to 320)')
    parser.add_argument('--max-train-length', type=int, default=3000,
                        help='Sequences longer than this value are cropped into subsequences, '
                             'each no longer than this value (defaults to 3000)')
    parser.add_argument('--iters', type=int, default=None, help='The number of iterations')
    parser.add_argument('--epochs', type=int, default=None, help='The number of epochs')
    parser.add_argument('--save-every', type=int, default=None,
                        help='Save the checkpoint every <save-every> iterations/epochs')
    parser.add_argument('--seed', type=int, default=42, help='The random seed')
    parser.add_argument('--random_seed', type=int, default=42, help='The random seed')
    parser.add_argument('--max-threads', type=int, default=8,
                        help='The maximum allowed number of threads used by this process')
    parser.add_argument('--eval', action="store_true", default=True,
                        help='Whether to perform evaluation after training')
    parser.add_argument('--irregular', type=float, default=0, help='The ratio of missing observations (defaults to 0)')
    parser.add_argument('--normalize_way', type=str, default='single', help='single or train_set')
    parser.add_argument('--save_csv_name', type=str, default='ts2vec_test_cls_0409_')
    parser.add_argument('--save_dir', type=str, default='/SSD/lz/time_tsm/ts2vec_cls/result')
    args = parser.parse_args()
    set_seed(args)
    print("Dataset:", args.dataset)
    print("Arguments:", str(args))

    device = init_dl_program(args.gpu, seed=args.seed, max_threads=args.max_threads)

    print('Loading data... ', end='')
    config = dict(
        batch_size=args.batch_size,
        lr=args.lr,
        output_dims=args.repr_dims,
        max_train_length=args.max_train_length
    )
    if args.save_every is not None:
        unit = 'epoch' if args.epochs is not None else 'iter'
        config[f'after_{unit}_callback'] = save_checkpoint_callback(args.save_every, unit)

    run_dir = 'training/' + args.dataset + '__' + name_with_datetime(args.run_name)
    os.makedirs(run_dir, exist_ok=True)

    sum_dataset, sum_target, num_classes = build_dataset(args)
    train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = get_all_datasets(
        sum_dataset, sum_target)

    test_accuracies = []
    train_time = 0.0
    for i, train_dataset in enumerate(train_datasets):
        print("\nStart K_fold = ", i)
        train_labels = train_targets[i]
        val_dataset = val_datasets[i]
        val_labels = val_targets[i]
        test_dataset = test_datasets[i]
        test_labels = test_targets[i]

        # mean impute for missing values in dataset
        train_dataset, val_dataset, test_dataset = fill_nan_value(train_dataset, val_dataset, test_dataset)
        if args.normalize_way == 'single':
            # TODO normalize per series
            train_dataset = normalize_per_series(train_dataset)
            val_dataset = normalize_per_series(val_dataset)
            test_dataset = normalize_per_series(test_dataset)
        else:
            train_dataset, val_dataset, test_dataset = normalize_train_val_test(train_dataset, val_dataset,
                                                                                test_dataset)

        train_dataset = train_dataset[..., np.newaxis]
        val_dataset = val_dataset[..., np.newaxis]
        test_dataset = test_dataset[..., np.newaxis]
        # print(type(train_dataset))
        train_val_dataset = np.concatenate((train_dataset, val_dataset))
        train_val_labels = np.concatenate((train_labels, val_labels))
        # print(train_labels.shape, val_labels.shape)
        # print("train, val train_val_data.shape = ", train_dataset.shape, val_dataset.shape, train_val_dataset.shape, train_val_labels.shape)

        t = time.time()
        model = TS2Vec(
            input_dims=train_dataset.shape[-1],
            device=device,
            **config
        )
        loss_log = model.fit(
            train_dataset,
            n_epochs=args.epochs,
            n_iters=args.iters,
            verbose=True
        )
        # model.save(f'{run_dir}/model.pkl')

        ## evaluation on test_dataset
        out, eval_res = tasks.eval_classification(model, train_val_dataset, train_val_labels, test_dataset, test_labels,
                                                  eval_protocol='svm')
        t = time.time() - t
        print(f"\nTraining time: {datetime.timedelta(seconds=t)}\n")
        train_time += t

        print('Evaluation result:', eval_res)
        test_accuracies.append(eval_res['acc'])
    test_accuracies = torch.Tensor(test_accuracies)
    # averaged over the folds; the division by 5 assumes the 5-fold split from get_all_datasets
    save_cls_result(args, test_accu=torch.mean(test_accuracies), test_std=torch.std(test_accuracies),
                    train_time=train_time / 5, end_val_epoch=0.00, seeds=args.random_seed)
    print("Finished.")
================================================
FILE: ts_classification_methods/ts2vec_cls/train_tsm_uea.py
================================================
import argparse
import datetime
import os
import sys
import time
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import numpy as np
import torch
from data.preprocessing import normalize_train_val_test
from data.preprocessing import load_UEA, fill_nan_value, normalize_uea_set
from ts2vec_cls import tasks
from ts2vec_cls.ts2vec import TS2Vec
from ts2vec_cls.utils import init_dl_program, name_with_datetime
from tsm_utils import save_cls_result, get_all_datasets, set_seed
def save_checkpoint_callback(
save_every=1,
unit='epoch'
):
assert unit in ('epoch', 'iter')
def callback(model, loss):
n = model.n_epochs if unit == 'epoch' else model.n_iters
if n % save_every == 0:
model.save(f'{run_dir}/model_{n}.pkl')
return callback
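# Illustrative wiring (mirrors the config setup below; save_every=5 is an example value):
#   config['after_epoch_callback'] = save_checkpoint_callback(save_every=5, unit='epoch')
#   # TS2Vec.fit then invokes the callback after each epoch and writes
#   # run_dir/model_<n>.pkl whenever n % 5 == 0.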
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', type=str, default='BasicMotions', help='The dataset name')
parser.add_argument('--dataroot', type=str, default='/SSD/lz/Multivariate2018_arff',
help='path to the UEA archive folder')
parser.add_argument('--run_name', default='UCR',
help='The folder name used to save model, output and evaluation metrics. This can be set to any word')
parser.add_argument('--loader', type=str, default='UCR',
help='The data loader used to load the experimental data. This can be set to UCR, UEA, '
'forecast_csv, forecast_csv_univar, anomaly, or anomaly_coldstart')
parser.add_argument('--gpu', type=int, default=1,
help='The gpu no. used for training and inference (defaults to 1)')
parser.add_argument('--batch-size', type=int, default=8, help='The batch size (defaults to 8)')
parser.add_argument('--lr', type=float, default=0.001, help='The learning rate (defaults to 0.001)')
parser.add_argument('--repr-dims', type=int, default=320, help='The representation dimension (defaults to 320)')
parser.add_argument('--max-train-length', type=int, default=3000,
help='The maximum allowed sequence length for training; longer sequences are cropped '
'into sub-sequences no longer than this value (defaults to 3000)')
parser.add_argument('--iters', type=int, default=None, help='The number of iterations')
parser.add_argument('--epochs', type=int, default=None, help='The number of epochs')
parser.add_argument('--save-every', type=int, default=None,
help='Save the checkpoint every iterations/epochs')
parser.add_argument('--seed', type=int, default=42, help='The random seed')
parser.add_argument('--random_seed', type=int, default=42, help='The random seed')
parser.add_argument('--max-threads', type=int, default=8,
help='The maximum allowed number of threads used by this process')
# NOTE: with action="store_true" and default=True, this flag is effectively always on
parser.add_argument('--eval', action="store_true", default=True,
help='Whether to perform evaluation after training')
parser.add_argument('--irregular', type=float, default=0, help='The ratio of missing observations (defaults to 0)')
parser.add_argument('--normalize_way', type=str, default='single', help='single or train_set')
parser.add_argument('--save_csv_name', type=str, default='ts2vec_test_uea_0423_')
parser.add_argument('--save_dir', type=str, default='/SSD/lz/time_tsm/ts2vec_cls/result')
args = parser.parse_args()
set_seed(args)
print("Dataset:", args.dataset)
print("Arguments:", str(args))
device = init_dl_program(args.gpu, seed=args.seed, max_threads=args.max_threads)
print('Loading data... ', end='')
config = dict(
batch_size=args.batch_size,
lr=args.lr,
output_dims=args.repr_dims,
max_train_length=args.max_train_length
)
if args.save_every is not None:
unit = 'epoch' if args.epochs is not None else 'iter'
config[f'after_{unit}_callback'] = save_checkpoint_callback(args.save_every, unit)
run_dir = 'training/' + args.dataset + '__' + name_with_datetime(args.run_name)
os.makedirs(run_dir, exist_ok=True)
# sum_dataset, sum_target, num_classes = build_dataset(args)
sum_dataset, sum_target, num_classes = load_UEA(
dataroot=args.dataroot,
dataset=args.dataset)
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = get_all_datasets(
sum_dataset, sum_target)
test_accuracies = []
train_time = 0.0
for i, train_dataset in enumerate(train_datasets):
print("\nStart K_fold = ", i)
train_labels = train_targets[i]
val_dataset = val_datasets[i]
val_labels = val_targets[i]
test_dataset = test_datasets[i]
test_labels = test_targets[i]
# mean impute for missing values in dataset
train_dataset, val_dataset, test_dataset = fill_nan_value(train_dataset, val_dataset, test_dataset)
if args.normalize_way == 'single':
train_dataset = normalize_uea_set(train_dataset)
val_dataset = normalize_uea_set(val_dataset)
test_dataset = normalize_uea_set(test_dataset)
else:
train_dataset, val_dataset, test_dataset = normalize_train_val_test(train_dataset, val_dataset,
test_dataset)
# train_dataset = train_dataset[..., np.newaxis]
# val_dataset = val_dataset[..., np.newaxis]
# test_dataset = test_dataset[..., np.newaxis]
# print(type(train_dataset))
train_val_dataset = np.concatenate((train_dataset, val_dataset))
train_val_labels = np.concatenate((train_labels, val_labels))
# print(train_labels.shape, val_labels.shape)
# print("train, val train_val_data.shape = ", train_dataset.shape, val_dataset.shape, train_val_dataset.shape, train_val_labels.shape)
t = time.time()
model = TS2Vec(
input_dims=train_dataset.shape[-1],
device=device,
**config
)
loss_log = model.fit(
train_dataset,
n_epochs=args.epochs,
n_iters=args.iters,
verbose=True
)
# model.save(f'{run_dir}/model.pkl')
## evaluation on the test set
out, eval_res = tasks.eval_classification(model, train_val_dataset, train_val_labels, test_dataset, test_labels,
eval_protocol='svm')
t = time.time() - t
print(f"\nTraining time: {datetime.timedelta(seconds=t)}\n")
train_time += t
print('Evaluation result:', eval_res)
test_accuracies.append(eval_res['acc'])
test_accuracies = torch.Tensor(test_accuracies)
save_cls_result(args, test_accu=torch.mean(test_accuracies), test_std=torch.std(test_accuracies),
train_time=train_time / 5, end_val_epoch=0.00, seeds=args.random_seed)  # train_time averaged over the 5 folds
print("Finished.")
================================================
FILE: ts_classification_methods/ts2vec_cls/ts2vec.py
================================================
import torch
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
from models import TSEncoder
from models.losses import hierarchical_contrastive_loss
from utils import take_per_row, split_with_nan, centerize_vary_length_series, torch_pad_nan
import math
class TS2Vec:
'''The TS2Vec model'''
def __init__(
self,
input_dims,
output_dims=320,
hidden_dims=64,
depth=10,
device='cuda',
lr=0.001,
batch_size=16,
max_train_length=None,
temporal_unit=0,
after_iter_callback=None,
after_epoch_callback=None
):
''' Initialize a TS2Vec model.
Args:
input_dims (int): The input dimension. For a univariate time series, this should be set to 1.
output_dims (int): The representation dimension.
hidden_dims (int): The hidden dimension of the encoder.
depth (int): The number of hidden residual blocks in the encoder.
device (str): The device used for training and inference, e.g. 'cuda'.
lr (float): The learning rate.
batch_size (int): The batch size.
max_train_length (Union[int, NoneType]): The maximum allowed sequence length for training. Sequences longer than this value are cropped into sub-sequences, each no longer than this value.
temporal_unit (int): The minimum unit to perform temporal contrast. When training on very long sequences, this param helps reduce time and memory cost.
after_iter_callback (Union[Callable, NoneType]): A callback function called after each iteration.
after_epoch_callback (Union[Callable, NoneType]): A callback function called after each epoch.
'''
super().__init__()
self.device = device
self.lr = lr
self.batch_size = batch_size
self.max_train_length = max_train_length
self.temporal_unit = temporal_unit
self._net = TSEncoder(input_dims=input_dims, output_dims=output_dims, hidden_dims=hidden_dims, depth=depth).to(self.device)
self.net = torch.optim.swa_utils.AveragedModel(self._net)
self.net.update_parameters(self._net)
self.after_iter_callback = after_iter_callback
self.after_epoch_callback = after_epoch_callback
self.n_epochs = 0
self.n_iters = 0
def fit(self, train_data, n_epochs=None, n_iters=None, verbose=False):
''' Train the TS2Vec model.
Args:
train_data (numpy.ndarray): The training data, with shape (n_instance, n_timestamps, n_features). All missing data should be set to NaN.
n_epochs (Union[int, NoneType]): The number of epochs. Training stops once this is reached.
n_iters (Union[int, NoneType]): The number of iterations. Training stops once this is reached. If neither n_epochs nor n_iters is specified, a default is used: n_iters = 200 for datasets with size <= 100000, 600 otherwise.
verbose (bool): Whether to print the training loss after each epoch.
Returns:
loss_log: a list containing the training loss of each epoch.
'''
assert train_data.ndim == 3
if n_iters is None and n_epochs is None:
n_iters = 200 if train_data.size <= 100000 else 600 # default param for n_iters
if self.max_train_length is not None:
sections = train_data.shape[1] // self.max_train_length
if sections >= 2:
train_data = np.concatenate(split_with_nan(train_data, sections, axis=1), axis=0)
temporal_missing = np.isnan(train_data).all(axis=-1).any(axis=0)
if temporal_missing[0] or temporal_missing[-1]:
train_data = centerize_vary_length_series(train_data)
train_data = train_data[~np.isnan(train_data).all(axis=2).all(axis=1)]
train_dataset = TensorDataset(torch.from_numpy(train_data).to(torch.float))
train_loader = DataLoader(train_dataset, batch_size=min(self.batch_size, len(train_dataset)), shuffle=True, drop_last=True)
optimizer = torch.optim.AdamW(self._net.parameters(), lr=self.lr)
loss_log = []
while True:
if n_epochs is not None and self.n_epochs >= n_epochs:
break
cum_loss = 0
n_epoch_iters = 0
interrupted = False
for batch in train_loader:
# print("self.n_iters = ", self.n_iters, ", n_iters = ", n_iters)
if n_iters is not None and self.n_iters >= n_iters:
interrupted = True
break
x = batch[0]
if self.max_train_length is not None and x.size(1) > self.max_train_length:
window_offset = np.random.randint(x.size(1) - self.max_train_length + 1)
x = x[:, window_offset : window_offset + self.max_train_length]
x = x.to(self.device)
ts_l = x.size(1)
crop_l = np.random.randint(low=2 ** (self.temporal_unit + 1), high=ts_l+1)
crop_left = np.random.randint(ts_l - crop_l + 1)
crop_right = crop_left + crop_l
crop_eleft = np.random.randint(crop_left + 1)
crop_eright = np.random.randint(low=crop_right, high=ts_l + 1)
crop_offset = np.random.randint(low=-crop_eleft, high=ts_l - crop_eright + 1, size=x.size(0))
optimizer.zero_grad()
out1 = self._net(take_per_row(x, crop_offset + crop_eleft, crop_right - crop_eleft))
out1 = out1[:, -crop_l:]
out2 = self._net(take_per_row(x, crop_offset + crop_left, crop_eright - crop_left))
out2 = out2[:, :crop_l]
loss = hierarchical_contrastive_loss(
out1,
out2,
temporal_unit=self.temporal_unit
)
loss.backward()
optimizer.step()
self.net.update_parameters(self._net)
cum_loss += loss.item()
n_epoch_iters += 1
self.n_iters += 1
if self.after_iter_callback is not None:
self.after_iter_callback(self, loss.item())
if interrupted:
break
cum_loss /= n_epoch_iters
loss_log.append(cum_loss)
if verbose:
print(f"Epoch #{self.n_epochs}: loss={cum_loss}")
self.n_epochs += 1
if self.after_epoch_callback is not None:
self.after_epoch_callback(self, cum_loss)
return loss_log
def _eval_with_pooling(self, x, mask=None, slicing=None, encoding_window=None):
out = self.net(x.to(self.device, non_blocking=True), mask)
if encoding_window == 'full_series':
if slicing is not None:
out = out[:, slicing]
out = F.max_pool1d(
out.transpose(1, 2),
kernel_size = out.size(1),
).transpose(1, 2)
elif isinstance(encoding_window, int):
out = F.max_pool1d(
out.transpose(1, 2),
kernel_size = encoding_window,
stride = 1,
padding = encoding_window // 2
).transpose(1, 2)
if encoding_window % 2 == 0:
out = out[:, :-1]
if slicing is not None:
out = out[:, slicing]
elif encoding_window == 'multiscale':
p = 0
reprs = []
while (1 << p) + 1 < out.size(1):
t_out = F.max_pool1d(
out.transpose(1, 2),
kernel_size = (1 << (p + 1)) + 1,
stride = 1,
padding = 1 << p
).transpose(1, 2)
if slicing is not None:
t_out = t_out[:, slicing]
reprs.append(t_out)
p += 1
out = torch.cat(reprs, dim=-1)
else:
if slicing is not None:
out = out[:, slicing]
return out.cpu()
def encode(self, data, mask=None, encoding_window=None, casual=False, sliding_length=None, sliding_padding=0, batch_size=None):
''' Compute representations using the model.
Args:
data (numpy.ndarray): This should have a shape of (n_instance, n_timestamps, n_features). All missing data should be set to NaN.
mask (str): The mask used by the encoder. This can be set to 'binomial', 'continuous', 'all_true', 'all_false' or 'mask_last'.
encoding_window (Union[str, int]): When specified, the computed representation is max pooled over this window. This can be set to 'full_series', 'multiscale' or an integer specifying the pooling kernel size.
casual (bool): When set to True, future information is not encoded into the representation of each timestamp (i.e. causal encoding; the parameter name is kept as-is for compatibility).
sliding_length (Union[int, NoneType]): The length of the sliding window. When specified, sliding inference is applied to the time series.
sliding_padding (int): The length of contextual data used for inference on each sliding window.
batch_size (Union[int, NoneType]): The batch size used for inference. If not specified, the training batch size is used.
Returns:
repr: The representations for data.
'''
assert self.net is not None, 'please train or load a net first'
assert data.ndim == 3
if batch_size is None:
batch_size = self.batch_size
n_samples, ts_l, _ = data.shape
org_training = self.net.training
self.net.eval()
dataset = TensorDataset(torch.from_numpy(data).to(torch.float))
loader = DataLoader(dataset, batch_size=batch_size)
with torch.no_grad():
output = []
for batch in loader:
x = batch[0]
if sliding_length is not None:
reprs = []
if n_samples < batch_size:
calc_buffer = []
calc_buffer_l = 0
for i in range(0, ts_l, sliding_length):
l = i - sliding_padding
r = i + sliding_length + (sliding_padding if not casual else 0)
x_sliding = torch_pad_nan(
x[:, max(l, 0) : min(r, ts_l)],
left=-l if l<0 else 0,
right=r-ts_l if r>ts_l else 0,
dim=1
)
if n_samples < batch_size:
if calc_buffer_l + n_samples > batch_size:
out = self._eval_with_pooling(
torch.cat(calc_buffer, dim=0),
mask,
slicing=slice(sliding_padding, sliding_padding+sliding_length),
encoding_window=encoding_window
)
reprs += torch.split(out, n_samples)
calc_buffer = []
calc_buffer_l = 0
calc_buffer.append(x_sliding)
calc_buffer_l += n_samples
else:
out = self._eval_with_pooling(
x_sliding,
mask,
slicing=slice(sliding_padding, sliding_padding+sliding_length),
encoding_window=encoding_window
)
reprs.append(out)
if n_samples < batch_size:
if calc_buffer_l > 0:
out = self._eval_with_pooling(
torch.cat(calc_buffer, dim=0),
mask,
slicing=slice(sliding_padding, sliding_padding+sliding_length),
encoding_window=encoding_window
)
reprs += torch.split(out, n_samples)
calc_buffer = []
calc_buffer_l = 0
out = torch.cat(reprs, dim=1)
if encoding_window == 'full_series':
out = F.max_pool1d(
out.transpose(1, 2).contiguous(),
kernel_size = out.size(1),
).squeeze(1)
else:
out = self._eval_with_pooling(x, mask, encoding_window=encoding_window)
if encoding_window == 'full_series':
out = out.squeeze(1)
output.append(out)
output = torch.cat(output, dim=0)
self.net.train(org_training)
return output.numpy()
def save(self, fn):
''' Save the model to a file.
Args:
fn (str): filename.
'''
torch.save(self.net.state_dict(), fn)
def load(self, fn):
''' Load the model from a file.
Args:
fn (str): filename.
'''
state_dict = torch.load(fn, map_location=self.device)
self.net.load_state_dict(state_dict)
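# Minimal usage sketch (synthetic data; shapes follow the docstrings above):
#   import numpy as np
#   data = np.random.randn(32, 100, 1).astype(np.float32)  # (n_instance, n_timestamps, n_features)
#   model = TS2Vec(input_dims=1, device='cuda')
#   loss_log = model.fit(data, n_epochs=5, verbose=True)
#   reprs = model.encode(data, encoding_window='full_series')  # -> shape (32, 320)
#   model.save('ts2vec_model.pkl'); model.load('ts2vec_model.pkl')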
================================================
FILE: ts_classification_methods/ts2vec_cls/utils.py
================================================
import os
import numpy as np
import pickle
import torch
import random
from datetime import datetime
def pkl_save(name, var):
with open(name, 'wb') as f:
pickle.dump(var, f)
def pkl_load(name):
with open(name, 'rb') as f:
return pickle.load(f)
def torch_pad_nan(arr, left=0, right=0, dim=0):
if left > 0:
padshape = list(arr.shape)
padshape[dim] = left
arr = torch.cat((torch.full(padshape, np.nan), arr), dim=dim)
if right > 0:
padshape = list(arr.shape)
padshape[dim] = right
arr = torch.cat((arr, torch.full(padshape, np.nan)), dim=dim)
return arr
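# Example (illustrative): torch_pad_nan(torch.ones(3), left=1, right=2, dim=0)
#   -> tensor([nan, 1., 1., 1., nan, nan])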
def pad_nan_to_target(array, target_length, axis=0, both_side=False):
assert array.dtype in [np.float16, np.float32, np.float64]
pad_size = target_length - array.shape[axis]
if pad_size <= 0:
return array
npad = [(0, 0)] * array.ndim
if both_side:
npad[axis] = (pad_size // 2, pad_size - pad_size//2)
else:
npad[axis] = (0, pad_size)
return np.pad(array, pad_width=npad, mode='constant', constant_values=np.nan)
def split_with_nan(x, sections, axis=0):
assert x.dtype in [np.float16, np.float32, np.float64]
arrs = np.array_split(x, sections, axis=axis)
target_length = arrs[0].shape[axis]
for i in range(len(arrs)):
arrs[i] = pad_nan_to_target(arrs[i], target_length, axis=axis)
return arrs
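# Behaviour sketch: np.array_split cuts a length-10 axis into sections of 4/3/3;
# the shorter chunks are NaN-padded back to the first chunk's length (4), so all
# returned arrays share one shape and can be concatenated along another axis.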
def take_per_row(A, indx, num_elem):
all_indx = indx[:,None] + np.arange(num_elem)
return A[torch.arange(all_indx.shape[0])[:,None], all_indx]
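# Behaviour sketch: for A of shape (B, T, ...), indx of shape (B,) and num_elem = L,
# row b of the result is A[b, indx[b] : indx[b] + L]. Example (illustrative):
#   A = torch.arange(12).reshape(2, 6)
#   take_per_row(A, np.array([0, 2]), 3)  # -> tensor([[0, 1, 2], [8, 9, 10]])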
def centerize_vary_length_series(x):
prefix_zeros = np.argmax(~np.isnan(x).all(axis=-1), axis=1)
suffix_zeros = np.argmax(~np.isnan(x[:, ::-1]).all(axis=-1), axis=1)
offset = (prefix_zeros + suffix_zeros) // 2 - prefix_zeros
rows, column_indices = np.ogrid[:x.shape[0], :x.shape[1]]
offset[offset < 0] += x.shape[1]
column_indices = column_indices - offset[:, np.newaxis]
return x[rows, column_indices]
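# Illustrative effect: NaN padding at the two ends of each series is rebalanced,
# e.g. [nan, nan, x1, x2, x3, nan] (2 leading / 1 trailing NaN) becomes
# [nan, x1, x2, x3, nan, nan], centering the observed values up to integer division.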
def data_dropout(arr, p):
B, T = arr.shape[0], arr.shape[1]
mask = np.full(B*T, False, dtype=bool)  # np.bool was removed in modern NumPy
ele_sel = np.random.choice(
B*T,
size=int(B*T*p),
replace=False
)
mask[ele_sel] = True
res = arr.copy()
res[mask.reshape(B, T)] = np.nan
return res
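# Illustrative call: data_dropout(arr, 0.1) returns a copy of arr with a random
# 10% of the (instance, timestamp) positions set to NaN (entire feature vectors
# for 3-D inputs), e.g. to simulate irregular/missing observations.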
def name_with_datetime(prefix='default'):
now = datetime.now()
return prefix + '_' + now.strftime("%Y%m%d_%H%M%S")
def init_dl_program(
device_name,
seed=None,
use_cudnn=True,
deterministic=False,
benchmark=False,
use_tf32=False,
max_threads=None
):
import torch
if max_threads is not None:
torch.set_num_threads(max_threads) # intraop
if torch.get_num_interop_threads() != max_threads:
torch.set_num_interop_threads(max_threads) # interop
try:
import mkl
except ImportError:
pass
else:
mkl.set_num_threads(max_threads)
if seed is not None:
random.seed(seed)
seed += 1
np.random.seed(seed)
seed += 1
torch.manual_seed(seed)
if isinstance(device_name, (str, int)):
device_name = [device_name]
devices = []
for t in reversed(device_name):
t_device = torch.device(t)
devices.append(t_device)
if t_device.type == 'cuda':
assert torch.cuda.is_available()
torch.cuda.set_device(t_device)
if seed is not None:
seed += 1
torch.cuda.manual_seed(seed)
devices.reverse()
torch.backends.cudnn.enabled = use_cudnn
torch.backends.cudnn.deterministic = deterministic
torch.backends.cudnn.benchmark = benchmark
if hasattr(torch.backends.cudnn, 'allow_tf32'):
torch.backends.cudnn.allow_tf32 = use_tf32
torch.backends.cuda.matmul.allow_tf32 = use_tf32
return devices if len(devices) > 1 else devices[0]
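# Typical call (mirrors the training scripts in this directory):
#   device = init_dl_program(0, seed=42, max_threads=8)  # -> device for cuda:0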
================================================
FILE: ts_classification_methods/tsm_utils.py
================================================
import os
import random
import numpy as np
import pandas as pd
import torch
import torch.optim
from data.preprocessing import load_data, k_fold, transfer_labels
from model.loss import cross_entropy, reconstruction_loss
from model.tsm_model import FCN, DilatedConvolution, Classifier, NonLinearClassifier, RNNDecoder, FCNDecoder
def set_seed(args):
random.seed(args.random_seed)
np.random.seed(args.random_seed)
torch.manual_seed(args.random_seed)
torch.cuda.manual_seed(args.random_seed)
torch.cuda.manual_seed_all(args.random_seed)
def build_model(args):
if args.backbone == 'fcn':
model = FCN(args.num_classes, args.input_size)
elif args.backbone == 'dilated':
model = DilatedConvolution(args.in_channels, args.embedding_channels,
args.out_channels, args.depth, args.reduced_size, args.kernel_size, args.num_classes)
if args.task == 'classification':
if args.classifier == 'nonlinear':
classifier = NonLinearClassifier(args.classifier_input, 128, args.num_classes)
elif args.classifier == 'linear':
classifier = Classifier(args.classifier_input, args.num_classes)
elif args.task == 'reconstruction':
if args.decoder_backbone == 'rnn':
classifier = RNNDecoder(input_dim=args.input_size)
if args.decoder_backbone == 'fcn':
classifier = FCNDecoder(num_classes=args.num_classes, seq_len=args.seq_len, input_size=args.input_size)
return model, classifier
def build_dataset(args):
sum_dataset, sum_target, num_classes = load_data(args.dataroot, args.dataset)
sum_target = transfer_labels(sum_target)
return sum_dataset, sum_target, num_classes
def build_loss(args):
if args.loss == 'cross_entropy':
return cross_entropy()
elif args.loss == 'reconstruction':
return reconstruction_loss()
def build_optimizer(args, parameters):
# NOTE: torch optimizers require the parameters to optimize as their first
# positional argument; the original signature omitted them, which raises a
# TypeError at call time. Callers should pass e.g. model.parameters().
if args.optimizer == 'adam':
return torch.optim.Adam(parameters, lr=args.lr, weight_decay=args.weight_decay)
elif args.optimizer == 'sgd':
return torch.optim.SGD(parameters, lr=args.lr, weight_decay=args.weight_decay)
def evaluate(val_loader, model, classifier, loss, device):
val_loss = 0
val_accu = 0
sum_len = 0
for data, target in val_loader:
# data, target = data.to(device), target.to(device)
# target = target.to(torch.int64)
with torch.no_grad():
val_pred = model(data)
val_pred = classifier(val_pred)
val_loss += loss(val_pred, target).item()
val_accu += torch.sum(torch.argmax(val_pred.data, axis=1) == target)
sum_len += len(target)
return val_loss / sum_len, val_accu / sum_len
def save_finetune_result(args, accu, std):
save_path = os.path.join(args.save_dir, args.source_dataset, 'finetune_result.csv')
# save_path = os.path.join(args.save_dir, 'finetune_result.csv')
accu = accu.cpu().numpy()
std = std.cpu().numpy()
if os.path.exists(save_path):
result_form = pd.read_csv(save_path)
else:
result_form = pd.DataFrame(columns=['dataset', 'accuracy', 'std'])
# DataFrame.append was removed in pandas 2.0; use pd.concat instead
result_form = pd.concat([result_form, pd.DataFrame(
[{'dataset': args.dataset, 'accuracy': '%.4f' % accu, 'std': '%.4f' % std}])],
ignore_index=True)
result_form = result_form.iloc[:, -3:]
result_form.to_csv(save_path)
def save_cls_result(args, test_accu, test_std, train_time, end_val_epoch, seeds=42):
save_path = os.path.join(args.save_dir, args.save_csv_name + 'cls_result.csv')
accu = test_accu.cpu().numpy()
std = test_std.cpu().numpy()
if os.path.exists(save_path):
result_form = pd.read_csv(save_path, index_col=0)
else:
result_form = pd.DataFrame(
columns=['dataset_name', 'test_accuracy', 'test_std', 'train_time', 'end_val_epoch', 'seeds'])
# DataFrame.append was removed in pandas 2.0; use pd.concat instead
result_form = pd.concat([result_form, pd.DataFrame(
[{'dataset_name': args.dataset, 'test_accuracy': '%.4f' % accu, 'test_std': '%.4f' % std,
'train_time': '%.4f' % train_time, 'end_val_epoch': '%.2f' % end_val_epoch,
'seeds': '%d' % seeds}])], ignore_index=True)
result_form.to_csv(save_path, index=True, index_label="id")
def get_all_datasets(data, target):
return k_fold(data, target)
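# Resulting CSV layout of save_cls_result (values are illustrative only):
#   id,dataset_name,test_accuracy,test_std,train_time,end_val_epoch,seeds
#   0,Coffee,0.9821,0.0123,12.3456,0.00,42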
================================================
FILE: ts_classification_methods/tst_cls/scripts/classification.sh
================================================
python src/main.py --dataset AllGestureWiimoteY --data_dir /dev_data/zzj/hzy/datasets/UCR/AllGestureWiimoteY --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset UWaveGestureLibraryX --data_dir /dev_data/zzj/hzy/datasets/UCR/UWaveGestureLibraryX --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset DiatomSizeReduction --data_dir /dev_data/zzj/hzy/datasets/UCR/DiatomSizeReduction --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset FreezerSmallTrain --data_dir /dev_data/zzj/hzy/datasets/UCR/FreezerSmallTrain --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ScreenType --data_dir /dev_data/zzj/hzy/datasets/UCR/ScreenType --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset MixedShapesSmallTrain --data_dir /dev_data/zzj/hzy/datasets/UCR/MixedShapesSmallTrain --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset SonyAIBORobotSurface2 --data_dir /dev_data/zzj/hzy/datasets/UCR/SonyAIBORobotSurface2 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset LargeKitchenAppliances --data_dir /dev_data/zzj/hzy/datasets/UCR/LargeKitchenAppliances --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ProximalPhalanxOutlineCorrect --data_dir /dev_data/zzj/hzy/datasets/UCR/ProximalPhalanxOutlineCorrect --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset OSULeaf --data_dir /dev_data/zzj/hzy/datasets/UCR/OSULeaf --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset OliveOil --data_dir /dev_data/zzj/hzy/datasets/UCR/OliveOil --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset FreezerRegularTrain --data_dir /dev_data/zzj/hzy/datasets/UCR/FreezerRegularTrain --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Herring --data_dir /dev_data/zzj/hzy/datasets/UCR/Herring --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset GesturePebbleZ1 --data_dir /dev_data/zzj/hzy/datasets/UCR/GesturePebbleZ1 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset MelbournePedestrian --data_dir /dev_data/zzj/hzy/datasets/UCR/MelbournePedestrian --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset PhalangesOutlinesCorrect --data_dir /dev_data/zzj/hzy/datasets/UCR/PhalangesOutlinesCorrect --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset CricketZ --data_dir /dev_data/zzj/hzy/datasets/UCR/CricketZ --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ACSF1 --data_dir /dev_data/zzj/hzy/datasets/UCR/ACSF1 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset FaceFour --data_dir /dev_data/zzj/hzy/datasets/UCR/FaceFour --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset SemgHandGenderCh2 --data_dir /dev_data/zzj/hzy/datasets/UCR/SemgHandGenderCh2 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Haptics --data_dir /dev_data/zzj/hzy/datasets/UCR/Haptics --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset UWaveGestureLibraryY --data_dir /dev_data/zzj/hzy/datasets/UCR/UWaveGestureLibraryY --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Coffee --data_dir /dev_data/zzj/hzy/datasets/UCR/Coffee --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset TwoLeadECG --data_dir /dev_data/zzj/hzy/datasets/UCR/TwoLeadECG --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset DistalPhalanxOutlineAgeGroup --data_dir /dev_data/zzj/hzy/datasets/UCR/DistalPhalanxOutlineAgeGroup --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset MixedShapesRegularTrain --data_dir /dev_data/zzj/hzy/datasets/UCR/MixedShapesRegularTrain --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset SmoothSubspace --data_dir /dev_data/zzj/hzy/datasets/UCR/SmoothSubspace --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Meat --data_dir /dev_data/zzj/hzy/datasets/UCR/Meat --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ShapesAll --data_dir /dev_data/zzj/hzy/datasets/UCR/ShapesAll --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset InsectEPGSmallTrain --data_dir /dev_data/zzj/hzy/datasets/UCR/InsectEPGSmallTrain --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset CinCECGTorso --data_dir /dev_data/zzj/hzy/datasets/UCR/CinCECGTorso --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset BeetleFly --data_dir /dev_data/zzj/hzy/datasets/UCR/BeetleFly --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Ham --data_dir /dev_data/zzj/hzy/datasets/UCR/Ham --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ProximalPhalanxTW --data_dir /dev_data/zzj/hzy/datasets/UCR/ProximalPhalanxTW --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ItalyPowerDemand --data_dir /dev_data/zzj/hzy/datasets/UCR/ItalyPowerDemand --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset GunPointMaleVersusFemale --data_dir /dev_data/zzj/hzy/datasets/UCR/GunPointMaleVersusFemale --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset SonyAIBORobotSurface1 --data_dir /dev_data/zzj/hzy/datasets/UCR/SonyAIBORobotSurface1 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset MedicalImages --data_dir /dev_data/zzj/hzy/datasets/UCR/MedicalImages --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset SmallKitchenAppliances --data_dir /dev_data/zzj/hzy/datasets/UCR/SmallKitchenAppliances --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset PigCVP --data_dir /dev_data/zzj/hzy/datasets/UCR/PigCVP --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Crop --data_dir /dev_data/zzj/hzy/datasets/UCR/Crop --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Chinatown --data_dir /dev_data/zzj/hzy/datasets/UCR/Chinatown --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset PLAID --data_dir /dev_data/zzj/hzy/datasets/UCR/PLAID --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset RefrigerationDevices --data_dir /dev_data/zzj/hzy/datasets/UCR/RefrigerationDevices --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Wine --data_dir /dev_data/zzj/hzy/datasets/UCR/Wine --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Yoga --data_dir /dev_data/zzj/hzy/datasets/UCR/Yoga --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset AllGestureWiimoteX --data_dir /dev_data/zzj/hzy/datasets/UCR/AllGestureWiimoteX --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset DistalPhalanxTW --data_dir /dev_data/zzj/hzy/datasets/UCR/DistalPhalanxTW --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Computers --data_dir /dev_data/zzj/hzy/datasets/UCR/Computers --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ElectricDevices --data_dir /dev_data/zzj/hzy/datasets/UCR/ElectricDevices --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Adiac --data_dir /dev_data/zzj/hzy/datasets/UCR/Adiac --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset InlineSkate --data_dir /dev_data/zzj/hzy/datasets/UCR/InlineSkate --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset FacesUCR --data_dir /dev_data/zzj/hzy/datasets/UCR/FacesUCR --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ShapeletSim --data_dir /dev_data/zzj/hzy/datasets/UCR/ShapeletSim --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset GunPointAgeSpan --data_dir /dev_data/zzj/hzy/datasets/UCR/GunPointAgeSpan --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Phoneme --data_dir /dev_data/zzj/hzy/datasets/UCR/Phoneme --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset CricketX --data_dir /dev_data/zzj/hzy/datasets/UCR/CricketX --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Lightning2 --data_dir /dev_data/zzj/hzy/datasets/UCR/Lightning2 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Beef --data_dir /dev_data/zzj/hzy/datasets/UCR/Beef --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset PowerCons --data_dir /dev_data/zzj/hzy/datasets/UCR/PowerCons --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Plane --data_dir /dev_data/zzj/hzy/datasets/UCR/Plane --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset NonInvasiveFetalECGThorax2 --data_dir /dev_data/zzj/hzy/datasets/UCR/NonInvasiveFetalECGThorax2 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset UMD --data_dir /dev_data/zzj/hzy/datasets/UCR/UMD --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Wafer --data_dir /dev_data/zzj/hzy/datasets/UCR/Wafer --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ToeSegmentation1 --data_dir /dev_data/zzj/hzy/datasets/UCR/ToeSegmentation1 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Car --data_dir /dev_data/zzj/hzy/datasets/UCR/Car --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset UWaveGestureLibraryZ --data_dir /dev_data/zzj/hzy/datasets/UCR/UWaveGestureLibraryZ --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset EOGVerticalSignal --data_dir /dev_data/zzj/hzy/datasets/UCR/EOGVerticalSignal --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset CBF --data_dir /dev_data/zzj/hzy/datasets/UCR/CBF --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset EOGHorizontalSignal --data_dir /dev_data/zzj/hzy/datasets/UCR/EOGHorizontalSignal --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Strawberry --data_dir /dev_data/zzj/hzy/datasets/UCR/Strawberry --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset StarLightCurves --data_dir /dev_data/zzj/hzy/datasets/UCR/StarLightCurves --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset DodgerLoopGame --data_dir /dev_data/zzj/hzy/datasets/UCR/DodgerLoopGame --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset FordA --data_dir /dev_data/zzj/hzy/datasets/UCR/FordA --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Fish --data_dir /dev_data/zzj/hzy/datasets/UCR/Fish --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset PigArtPressure --data_dir /dev_data/zzj/hzy/datasets/UCR/PigArtPressure --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ShakeGestureWiimoteZ --data_dir /dev_data/zzj/hzy/datasets/UCR/ShakeGestureWiimoteZ --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ECGFiveDays --data_dir /dev_data/zzj/hzy/datasets/UCR/ECGFiveDays --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset GunPointOldVersusYoung --data_dir /dev_data/zzj/hzy/datasets/UCR/GunPointOldVersusYoung --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset GesturePebbleZ2 --data_dir /dev_data/zzj/hzy/datasets/UCR/GesturePebbleZ2 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ECG200 --data_dir /dev_data/zzj/hzy/datasets/UCR/ECG200 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Symbols --data_dir /dev_data/zzj/hzy/datasets/UCR/Symbols --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset FordB --data_dir /dev_data/zzj/hzy/datasets/UCR/FordB --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset FaceAll --data_dir /dev_data/zzj/hzy/datasets/UCR/FaceAll --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset MiddlePhalanxTW --data_dir /dev_data/zzj/hzy/datasets/UCR/MiddlePhalanxTW --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset MiddlePhalanxOutlineCorrect --data_dir /dev_data/zzj/hzy/datasets/UCR/MiddlePhalanxOutlineCorrect --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset GestureMidAirD1 --data_dir /dev_data/zzj/hzy/datasets/UCR/GestureMidAirD1 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset InsectEPGRegularTrain --data_dir /dev_data/zzj/hzy/datasets/UCR/InsectEPGRegularTrain --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset DodgerLoopDay --data_dir /dev_data/zzj/hzy/datasets/UCR/DodgerLoopDay --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ProximalPhalanxOutlineAgeGroup --data_dir /dev_data/zzj/hzy/datasets/UCR/ProximalPhalanxOutlineAgeGroup --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset HandOutlines --data_dir /dev_data/zzj/hzy/datasets/UCR/HandOutlines --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset SwedishLeaf --data_dir /dev_data/zzj/hzy/datasets/UCR/SwedishLeaf --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset AllGestureWiimoteZ --data_dir /dev_data/zzj/hzy/datasets/UCR/AllGestureWiimoteZ --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset InsectWingbeatSound --data_dir /dev_data/zzj/hzy/datasets/UCR/InsectWingbeatSound --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset MiddlePhalanxOutlineAgeGroup --data_dir /dev_data/zzj/hzy/datasets/UCR/MiddlePhalanxOutlineAgeGroup --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset GestureMidAirD3 --data_dir /dev_data/zzj/hzy/datasets/UCR/GestureMidAirD3 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ChlorineConcentration --data_dir /dev_data/zzj/hzy/datasets/UCR/ChlorineConcentration --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ArrowHead --data_dir /dev_data/zzj/hzy/datasets/UCR/ArrowHead --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Fungi --data_dir /dev_data/zzj/hzy/datasets/UCR/Fungi --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset PigAirwayPressure --data_dir /dev_data/zzj/hzy/datasets/UCR/PigAirwayPressure --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset PickupGestureWiimoteZ --data_dir /dev_data/zzj/hzy/datasets/UCR/PickupGestureWiimoteZ --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Rock --data_dir /dev_data/zzj/hzy/datasets/UCR/Rock --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Worms --data_dir /dev_data/zzj/hzy/datasets/UCR/Worms --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Lightning7 --data_dir /dev_data/zzj/hzy/datasets/UCR/Lightning7 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset BME --data_dir /dev_data/zzj/hzy/datasets/UCR/BME --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset SyntheticControl --data_dir /dev_data/zzj/hzy/datasets/UCR/SyntheticControl --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset MoteStrain --data_dir /dev_data/zzj/hzy/datasets/UCR/MoteStrain --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset SemgHandMovementCh2 --data_dir /dev_data/zzj/hzy/datasets/UCR/SemgHandMovementCh2 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Mallat --data_dir /dev_data/zzj/hzy/datasets/UCR/Mallat --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset GestureMidAirD2 --data_dir /dev_data/zzj/hzy/datasets/UCR/GestureMidAirD2 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset CricketY --data_dir /dev_data/zzj/hzy/datasets/UCR/CricketY --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset NonInvasiveFetalECGThorax1 --data_dir /dev_data/zzj/hzy/datasets/UCR/NonInvasiveFetalECGThorax1 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ToeSegmentation2 --data_dir /dev_data/zzj/hzy/datasets/UCR/ToeSegmentation2 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset ECG5000 --data_dir /dev_data/zzj/hzy/datasets/UCR/ECG5000 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Trace --data_dir /dev_data/zzj/hzy/datasets/UCR/Trace --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset WormsTwoClass --data_dir /dev_data/zzj/hzy/datasets/UCR/WormsTwoClass --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset GunPoint --data_dir /dev_data/zzj/hzy/datasets/UCR/GunPoint --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset UWaveGestureLibraryAll --data_dir /dev_data/zzj/hzy/datasets/UCR/UWaveGestureLibraryAll --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset EthanolLevel --data_dir /dev_data/zzj/hzy/datasets/UCR/EthanolLevel --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset WordSynonyms --data_dir /dev_data/zzj/hzy/datasets/UCR/WordSynonyms --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset HouseTwenty --data_dir /dev_data/zzj/hzy/datasets/UCR/HouseTwenty --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset DodgerLoopWeekend --data_dir /dev_data/zzj/hzy/datasets/UCR/DodgerLoopWeekend --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset Earthquakes --data_dir /dev_data/zzj/hzy/datasets/UCR/Earthquakes --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset TwoPatterns --data_dir /dev_data/zzj/hzy/datasets/UCR/TwoPatterns --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset DistalPhalanxOutlineCorrect --data_dir /dev_data/zzj/hzy/datasets/UCR/DistalPhalanxOutlineCorrect --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset SemgHandSubjectCh2 --data_dir /dev_data/zzj/hzy/datasets/UCR/SemgHandSubjectCh2 --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset FiftyWords --data_dir /dev_data/zzj/hzy/datasets/UCR/FiftyWords --batch_size 128 --task classification --gpu 0 --epochs 1000
python src/main.py --dataset BirdChicken --data_dir /dev_data/zzj/hzy/datasets/UCR/BirdChicken --batch_size 128 --task classification --gpu 0 --epochs 1000
================================================
FILE: ts_classification_methods/tst_cls/scripts/pretrain_finetune.sh
================================================
python src/main.py --dataset AllGestureWiimoteY --data_dir /dev_data/zzj/hzy/datasets/UCR/AllGestureWiimoteY --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset UWaveGestureLibraryX --data_dir /dev_data/zzj/hzy/datasets/UCR/UWaveGestureLibraryX --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset DiatomSizeReduction --data_dir /dev_data/zzj/hzy/datasets/UCR/DiatomSizeReduction --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset FreezerSmallTrain --data_dir /dev_data/zzj/hzy/datasets/UCR/FreezerSmallTrain --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ScreenType --data_dir /dev_data/zzj/hzy/datasets/UCR/ScreenType --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset MixedShapesSmallTrain --data_dir /dev_data/zzj/hzy/datasets/UCR/MixedShapesSmallTrain --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset SonyAIBORobotSurface2 --data_dir /dev_data/zzj/hzy/datasets/UCR/SonyAIBORobotSurface2 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset LargeKitchenAppliances --data_dir /dev_data/zzj/hzy/datasets/UCR/LargeKitchenAppliances --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ProximalPhalanxOutlineCorrect --data_dir /dev_data/zzj/hzy/datasets/UCR/ProximalPhalanxOutlineCorrect --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset OSULeaf --data_dir /dev_data/zzj/hzy/datasets/UCR/OSULeaf --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset OliveOil --data_dir /dev_data/zzj/hzy/datasets/UCR/OliveOil --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset FreezerRegularTrain --data_dir /dev_data/zzj/hzy/datasets/UCR/FreezerRegularTrain --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Herring --data_dir /dev_data/zzj/hzy/datasets/UCR/Herring --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset GesturePebbleZ1 --data_dir /dev_data/zzj/hzy/datasets/UCR/GesturePebbleZ1 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset MelbournePedestrian --data_dir /dev_data/zzj/hzy/datasets/UCR/MelbournePedestrian --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset PhalangesOutlinesCorrect --data_dir /dev_data/zzj/hzy/datasets/UCR/PhalangesOutlinesCorrect --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset CricketZ --data_dir /dev_data/zzj/hzy/datasets/UCR/CricketZ --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ACSF1 --data_dir /dev_data/zzj/hzy/datasets/UCR/ACSF1 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset FaceFour --data_dir /dev_data/zzj/hzy/datasets/UCR/FaceFour --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset SemgHandGenderCh2 --data_dir /dev_data/zzj/hzy/datasets/UCR/SemgHandGenderCh2 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Haptics --data_dir /dev_data/zzj/hzy/datasets/UCR/Haptics --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset UWaveGestureLibraryY --data_dir /dev_data/zzj/hzy/datasets/UCR/UWaveGestureLibraryY --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Coffee --data_dir /dev_data/zzj/hzy/datasets/UCR/Coffee --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset TwoLeadECG --data_dir /dev_data/zzj/hzy/datasets/UCR/TwoLeadECG --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset DistalPhalanxOutlineAgeGroup --data_dir /dev_data/zzj/hzy/datasets/UCR/DistalPhalanxOutlineAgeGroup --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset MixedShapesRegularTrain --data_dir /dev_data/zzj/hzy/datasets/UCR/MixedShapesRegularTrain --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset SmoothSubspace --data_dir /dev_data/zzj/hzy/datasets/UCR/SmoothSubspace --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Meat --data_dir /dev_data/zzj/hzy/datasets/UCR/Meat --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ShapesAll --data_dir /dev_data/zzj/hzy/datasets/UCR/ShapesAll --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset InsectEPGSmallTrain --data_dir /dev_data/zzj/hzy/datasets/UCR/InsectEPGSmallTrain --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset CinCECGTorso --data_dir /dev_data/zzj/hzy/datasets/UCR/CinCECGTorso --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset BeetleFly --data_dir /dev_data/zzj/hzy/datasets/UCR/BeetleFly --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Ham --data_dir /dev_data/zzj/hzy/datasets/UCR/Ham --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ProximalPhalanxTW --data_dir /dev_data/zzj/hzy/datasets/UCR/ProximalPhalanxTW --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ItalyPowerDemand --data_dir /dev_data/zzj/hzy/datasets/UCR/ItalyPowerDemand --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset GunPointMaleVersusFemale --data_dir /dev_data/zzj/hzy/datasets/UCR/GunPointMaleVersusFemale --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset SonyAIBORobotSurface1 --data_dir /dev_data/zzj/hzy/datasets/UCR/SonyAIBORobotSurface1 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset MedicalImages --data_dir /dev_data/zzj/hzy/datasets/UCR/MedicalImages --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset SmallKitchenAppliances --data_dir /dev_data/zzj/hzy/datasets/UCR/SmallKitchenAppliances --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset PigCVP --data_dir /dev_data/zzj/hzy/datasets/UCR/PigCVP --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Crop --data_dir /dev_data/zzj/hzy/datasets/UCR/Crop --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Chinatown --data_dir /dev_data/zzj/hzy/datasets/UCR/Chinatown --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset PLAID --data_dir /dev_data/zzj/hzy/datasets/UCR/PLAID --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset RefrigerationDevices --data_dir /dev_data/zzj/hzy/datasets/UCR/RefrigerationDevices --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Wine --data_dir /dev_data/zzj/hzy/datasets/UCR/Wine --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Yoga --data_dir /dev_data/zzj/hzy/datasets/UCR/Yoga --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset AllGestureWiimoteX --data_dir /dev_data/zzj/hzy/datasets/UCR/AllGestureWiimoteX --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset DistalPhalanxTW --data_dir /dev_data/zzj/hzy/datasets/UCR/DistalPhalanxTW --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Computers --data_dir /dev_data/zzj/hzy/datasets/UCR/Computers --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ElectricDevices --data_dir /dev_data/zzj/hzy/datasets/UCR/ElectricDevices --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Adiac --data_dir /dev_data/zzj/hzy/datasets/UCR/Adiac --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset InlineSkate --data_dir /dev_data/zzj/hzy/datasets/UCR/InlineSkate --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset FacesUCR --data_dir /dev_data/zzj/hzy/datasets/UCR/FacesUCR --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ShapeletSim --data_dir /dev_data/zzj/hzy/datasets/UCR/ShapeletSim --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset GunPointAgeSpan --data_dir /dev_data/zzj/hzy/datasets/UCR/GunPointAgeSpan --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Phoneme --data_dir /dev_data/zzj/hzy/datasets/UCR/Phoneme --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset CricketX --data_dir /dev_data/zzj/hzy/datasets/UCR/CricketX --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Lightning2 --data_dir /dev_data/zzj/hzy/datasets/UCR/Lightning2 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Beef --data_dir /dev_data/zzj/hzy/datasets/UCR/Beef --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset PowerCons --data_dir /dev_data/zzj/hzy/datasets/UCR/PowerCons --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Plane --data_dir /dev_data/zzj/hzy/datasets/UCR/Plane --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset NonInvasiveFetalECGThorax2 --data_dir /dev_data/zzj/hzy/datasets/UCR/NonInvasiveFetalECGThorax2 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset UMD --data_dir /dev_data/zzj/hzy/datasets/UCR/UMD --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Wafer --data_dir /dev_data/zzj/hzy/datasets/UCR/Wafer --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ToeSegmentation1 --data_dir /dev_data/zzj/hzy/datasets/UCR/ToeSegmentation1 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Car --data_dir /dev_data/zzj/hzy/datasets/UCR/Car --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset UWaveGestureLibraryZ --data_dir /dev_data/zzj/hzy/datasets/UCR/UWaveGestureLibraryZ --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset EOGVerticalSignal --data_dir /dev_data/zzj/hzy/datasets/UCR/EOGVerticalSignal --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset CBF --data_dir /dev_data/zzj/hzy/datasets/UCR/CBF --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset EOGHorizontalSignal --data_dir /dev_data/zzj/hzy/datasets/UCR/EOGHorizontalSignal --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Strawberry --data_dir /dev_data/zzj/hzy/datasets/UCR/Strawberry --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset StarLightCurves --data_dir /dev_data/zzj/hzy/datasets/UCR/StarLightCurves --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset DodgerLoopGame --data_dir /dev_data/zzj/hzy/datasets/UCR/DodgerLoopGame --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset FordA --data_dir /dev_data/zzj/hzy/datasets/UCR/FordA --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Fish --data_dir /dev_data/zzj/hzy/datasets/UCR/Fish --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset PigArtPressure --data_dir /dev_data/zzj/hzy/datasets/UCR/PigArtPressure --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ShakeGestureWiimoteZ --data_dir /dev_data/zzj/hzy/datasets/UCR/ShakeGestureWiimoteZ --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ECGFiveDays --data_dir /dev_data/zzj/hzy/datasets/UCR/ECGFiveDays --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset GunPointOldVersusYoung --data_dir /dev_data/zzj/hzy/datasets/UCR/GunPointOldVersusYoung --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset GesturePebbleZ2 --data_dir /dev_data/zzj/hzy/datasets/UCR/GesturePebbleZ2 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ECG200 --data_dir /dev_data/zzj/hzy/datasets/UCR/ECG200 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Symbols --data_dir /dev_data/zzj/hzy/datasets/UCR/Symbols --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset FordB --data_dir /dev_data/zzj/hzy/datasets/UCR/FordB --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset FaceAll --data_dir /dev_data/zzj/hzy/datasets/UCR/FaceAll --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset MiddlePhalanxTW --data_dir /dev_data/zzj/hzy/datasets/UCR/MiddlePhalanxTW --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset MiddlePhalanxOutlineCorrect --data_dir /dev_data/zzj/hzy/datasets/UCR/MiddlePhalanxOutlineCorrect --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset GestureMidAirD1 --data_dir /dev_data/zzj/hzy/datasets/UCR/GestureMidAirD1 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset InsectEPGRegularTrain --data_dir /dev_data/zzj/hzy/datasets/UCR/InsectEPGRegularTrain --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset DodgerLoopDay --data_dir /dev_data/zzj/hzy/datasets/UCR/DodgerLoopDay --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ProximalPhalanxOutlineAgeGroup --data_dir /dev_data/zzj/hzy/datasets/UCR/ProximalPhalanxOutlineAgeGroup --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset HandOutlines --data_dir /dev_data/zzj/hzy/datasets/UCR/HandOutlines --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset SwedishLeaf --data_dir /dev_data/zzj/hzy/datasets/UCR/SwedishLeaf --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset AllGestureWiimoteZ --data_dir /dev_data/zzj/hzy/datasets/UCR/AllGestureWiimoteZ --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset InsectWingbeatSound --data_dir /dev_data/zzj/hzy/datasets/UCR/InsectWingbeatSound --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset MiddlePhalanxOutlineAgeGroup --data_dir /dev_data/zzj/hzy/datasets/UCR/MiddlePhalanxOutlineAgeGroup --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset GestureMidAirD3 --data_dir /dev_data/zzj/hzy/datasets/UCR/GestureMidAirD3 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ChlorineConcentration --data_dir /dev_data/zzj/hzy/datasets/UCR/ChlorineConcentration --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ArrowHead --data_dir /dev_data/zzj/hzy/datasets/UCR/ArrowHead --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Fungi --data_dir /dev_data/zzj/hzy/datasets/UCR/Fungi --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset PigAirwayPressure --data_dir /dev_data/zzj/hzy/datasets/UCR/PigAirwayPressure --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset PickupGestureWiimoteZ --data_dir /dev_data/zzj/hzy/datasets/UCR/PickupGestureWiimoteZ --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Rock --data_dir /dev_data/zzj/hzy/datasets/UCR/Rock --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Worms --data_dir /dev_data/zzj/hzy/datasets/UCR/Worms --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Lightning7 --data_dir /dev_data/zzj/hzy/datasets/UCR/Lightning7 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset BME --data_dir /dev_data/zzj/hzy/datasets/UCR/BME --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset SyntheticControl --data_dir /dev_data/zzj/hzy/datasets/UCR/SyntheticControl --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset MoteStrain --data_dir /dev_data/zzj/hzy/datasets/UCR/MoteStrain --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset SemgHandMovementCh2 --data_dir /dev_data/zzj/hzy/datasets/UCR/SemgHandMovementCh2 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Mallat --data_dir /dev_data/zzj/hzy/datasets/UCR/Mallat --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset GestureMidAirD2 --data_dir /dev_data/zzj/hzy/datasets/UCR/GestureMidAirD2 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset CricketY --data_dir /dev_data/zzj/hzy/datasets/UCR/CricketY --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset NonInvasiveFetalECGThorax1 --data_dir /dev_data/zzj/hzy/datasets/UCR/NonInvasiveFetalECGThorax1 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ToeSegmentation2 --data_dir /dev_data/zzj/hzy/datasets/UCR/ToeSegmentation2 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset ECG5000 --data_dir /dev_data/zzj/hzy/datasets/UCR/ECG5000 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Trace --data_dir /dev_data/zzj/hzy/datasets/UCR/Trace --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset WormsTwoClass --data_dir /dev_data/zzj/hzy/datasets/UCR/WormsTwoClass --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset GunPoint --data_dir /dev_data/zzj/hzy/datasets/UCR/GunPoint --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset UWaveGestureLibraryAll --data_dir /dev_data/zzj/hzy/datasets/UCR/UWaveGestureLibraryAll --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset EthanolLevel --data_dir /dev_data/zzj/hzy/datasets/UCR/EthanolLevel --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset WordSynonyms --data_dir /dev_data/zzj/hzy/datasets/UCR/WordSynonyms --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset HouseTwenty --data_dir /dev_data/zzj/hzy/datasets/UCR/HouseTwenty --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset DodgerLoopWeekend --data_dir /dev_data/zzj/hzy/datasets/UCR/DodgerLoopWeekend --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset Earthquakes --data_dir /dev_data/zzj/hzy/datasets/UCR/Earthquakes --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset TwoPatterns --data_dir /dev_data/zzj/hzy/datasets/UCR/TwoPatterns --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset DistalPhalanxOutlineCorrect --data_dir /dev_data/zzj/hzy/datasets/UCR/DistalPhalanxOutlineCorrect --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset SemgHandSubjectCh2 --data_dir /dev_data/zzj/hzy/datasets/UCR/SemgHandSubjectCh2 --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset FiftyWords --data_dir /dev_data/zzj/hzy/datasets/UCR/FiftyWords --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
python src/main.py --dataset BirdChicken --data_dir /dev_data/zzj/hzy/datasets/UCR/BirdChicken --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
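# ---------------------------------------------------------------------------
# Note (added): every command above instantiates the same template, so the
# list can equivalently be generated with a loop. A minimal sketch, assuming
# the folder names under /dev_data/zzj/hzy/datasets/UCR match the --dataset
# values used above (only four names shown for brevity):
#
# for ds in ShapeletSim GunPointAgeSpan Phoneme CricketX; do
#   python src/main.py --dataset "$ds" --data_dir "/dev_data/zzj/hzy/datasets/UCR/$ds" \
#     --batch_size 128 --task pretrain_and_finetune --gpu 0 --epochs 400
# done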
================================================
FILE: ts_classification_methods/tst_cls/src/__init__.py
================================================
================================================
FILE: ts_classification_methods/tst_cls/src/dataprepare.py
================================================
import enum
import pandas as pd
from sklearn import model_selection
import sklearn
from sklearn.metrics import normalized_mutual_info_score
from sklearn.model_selection import StratifiedShuffleSplit, StratifiedKFold
from sklearn import preprocessing
import numpy as np
import os
import argparse
import shutil
def load_data(dataroot, dataset):
train = pd.read_csv(os.path.join(dataroot, dataset,
dataset+'_TRAIN.tsv'), sep='\t', header=None)
train_x = train.iloc[:, 1:]
train_target = train.iloc[:, 0]
test = pd.read_csv(os.path.join(dataroot, dataset,
dataset+'_TEST.tsv'), sep='\t', header=None)
test_x = test.iloc[:, 1:]
test_target = test.iloc[:, 0]
sum_dataset = pd.concat([train_x, test_x]).to_numpy(np.float32)
#sum_dataset = sum_dataset.fillna(sum_dataset.mean()).to_np(dtype=np.float32)
sum_target = pd.concat([train_target, test_target]).to_numpy(np.float32)
# sum_target = sum_target.fillna(sum_target.mean()).to_np(dtype=np.float32)
num_classes = len(np.unique(sum_target))
#sum_target = transfer_labels(sum_target)
return sum_dataset, sum_target
def transfer_labels(labels):
    # map arbitrary label values onto contiguous class indices 0..num_classes-1
    indices = np.unique(labels)
    num_samples = labels.shape[0]
    for i in range(num_samples):
        new_label = np.argwhere(labels[i] == indices)[0][0]
        labels[i] = new_label
    return labels
def k_fold(data, target):
skf = StratifiedKFold(5, shuffle=True)
#skf = StratifiedShuffleSplit(5)
train_sets = []
train_targets = []
val_sets = []
val_targets = []
test_sets = []
test_targets = []
for raw_index, test_index in skf.split(data, target):
raw_set = data[raw_index]
raw_target = target[raw_index]
test_sets.append(data[test_index])
test_targets.append(target[test_index])
train_index, val_index = next(StratifiedKFold(
4, shuffle=True).split(raw_set, raw_target))
# train_index, val_index = next(StratifiedShuffleSplit(1).split(raw_set, raw_target))
train_sets.append(raw_set[train_index])
train_targets.append(raw_target[train_index])
val_sets.append(raw_set[val_index])
val_targets.append(raw_target[val_index])
return np.array(train_sets), np.array(train_targets), np.array(val_sets), np.array(val_targets), np.array(test_sets), np.array(test_targets)
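# --- illustration (added; not part of the original file) ---
# Hedged sketch of what k_fold returns on toy data: 5 outer test folds, and
# for each, an inner 4-fold split of the remainder into train/validation.
# All names below are hypothetical.
def _demo_k_fold():
    toy_data = np.random.randn(20, 8).astype(np.float32)
    toy_target = np.array([0, 1] * 10)
    tr, tr_t, va, va_t, te, te_t = k_fold(toy_data, toy_target)
    assert len(tr) == len(va) == len(te) == 5
    # 20 samples -> 4 test, 4 val, 12 train per fold
    assert tr[0].shape[0] == 12 and va[0].shape[0] == 4 and te[0].shape[0] == 4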
def normalize_per_series(data):
std_ = np.std(data, axis=1, keepdims=True)
std_[std_ == 0] = 1.0
return (data - np.mean(data, axis=1, keepdims=True)) / std_
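# --- illustration (added) ---
# Minimal sanity check for normalize_per_series: each row (series) is
# z-normalized independently; an all-constant row maps to zeros because its
# zero std is replaced by 1.0 above.
def _demo_normalize_per_series():
    x = np.array([[1.0, 2.0, 3.0], [5.0, 5.0, 5.0]])
    out = normalize_per_series(x)
    assert np.allclose(out.mean(axis=1), 0.0)
    assert np.allclose(out[1], 0.0)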
def fill_nan_value(train_set, val_set, test_set):
ind = np.where(np.isnan(train_set))
col_mean = np.nanmean(train_set, axis=0)
col_mean[np.isnan(col_mean)] = 1e-6
train_set[ind] = np.take(col_mean, ind[1])
ind_val = np.where(np.isnan(val_set))
val_set[ind_val] = np.take(col_mean, ind_val[1])
ind_test = np.where(np.isnan(test_set))
test_set[ind_test] = np.take(col_mean, ind_test[1])
return train_set, val_set, test_set
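# --- illustration (added) ---
# fill_nan_value imputes NaNs in all three splits with the *train-set*
# column means (avoiding leakage from val/test). Toy example:
def _demo_fill_nan_value():
    train = np.array([[1.0, np.nan], [3.0, 4.0]])
    val = np.array([[np.nan, np.nan]])
    test = np.array([[np.nan, 2.0]])
    _, va, _ = fill_nan_value(train, val, test)
    assert va[0, 0] == 2.0 and va[0, 1] == 4.0  # train column means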
def _to_nested_df(arr):
    """Wrap a (num_samples, seq_len) array into the nested single-column
    ('dim_0') dataframe layout used by the dataset classes."""
    raw = pd.DataFrame(arr)
    df = pd.DataFrame()
    df['dim_0'] = [pd.Series(raw.iloc[x, :]) for x in range(len(raw))]
    lengths = df.applymap(lambda x: len(x)).values
    df = pd.concat((pd.DataFrame({col: df.loc[row, col] for col in df.columns}).reset_index(drop=True).set_index(
        pd.Series(lengths[row, 0] * [row])) for row in range(df.shape[0])), axis=0)
    return df.groupby(df.index).transform(lambda x: x)
# input: dataframes after .loc[indices]
# output: nested dataframes, the input of the dataset_class
def fill_nan_and_normalize(train_data, val_data, test_data, train_indices, val_indices, test_indices):
    train_arr = np.reshape(np.array(train_data), [len(train_indices), -1])
    val_arr = np.reshape(np.array(val_data), [len(val_indices), -1])
    test_arr = np.reshape(np.array(test_data), [len(test_indices), -1])
    train_arr, val_arr, test_arr = fill_nan_value(train_arr, val_arr, test_arr)
    train_arr = normalize_per_series(train_arr)
    val_arr = normalize_per_series(val_arr)
    test_arr = normalize_per_series(test_arr)
    return _to_nested_df(train_arr), _to_nested_df(val_arr), _to_nested_df(test_arr)
if __name__ == '__main__':
'''
CACHE_PATH = './src/data_cache'
shutil.rmtree(CACHE_PATH)
os.mkdir(CACHE_PATH)
parser = argparse.ArgumentParser()
parser.add_argument('--dataroot', default='/dev_data/zzj/hzy/datasets/UCR', type=str)
parser.add_argument('--dataset', default='ArrowHead', type=str)
args = parser.parse_args()
sum_dataset, sum_target = load_data(args.dataroot, args.dataset)
print(sum_target)
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = k_fold(sum_dataset, sum_target)
for i, train_dataset in enumerate(train_datasets):
train_target = train_targets[i]
val_dataset = val_datasets[i]
val_target = val_targets[i]
test_dataset = test_datasets[i]
test_target = test_targets[i]
test1 = pd.DataFrame(train_datasets[0])
test2 = pd.DataFrame(train_targets[0])
out = pd.concat([test2, test1], axis=1, ignore_index=True)
out.to_csv(os.path.join(CACHE_PATH, 'temp.tsv'), sep='\t', index=False, header=False)
ds, target = sktime.utils.load_data.load_from_ucr_tsv_to_dataframe(os.path.join(CACHE_PATH, 'temp.tsv'), return_separate_X_and_y=True)
print(ds)
print(target)
'''
    np.random.seed(42)  # fixed: `sklearn.random.seed` is not a public scikit-learn API; seed NumPy's RNG instead
sum_dataset, sum_target = load_data(
'/dev_data/zzj/hzy/datasets/UCR', 'ArrowHead')
skf = model_selection.StratifiedKFold(5, shuffle=True, random_state=42)
for x, y in skf.split(sum_dataset, sum_target):
print(x, y)
================================================
FILE: ts_classification_methods/tst_cls/src/datasets/__init__.py
================================================
================================================
FILE: ts_classification_methods/tst_cls/src/datasets/data.py
================================================
from typing import Optional
import os
from multiprocessing import Pool, cpu_count
import glob
import re
import logging
from itertools import repeat, chain
import numpy as np
import pandas as pd
from tqdm import tqdm
from sktime.utils import load_data
from datasets import utils
logger = logging.getLogger('__main__')
class Normalizer(object):
"""
Normalizes dataframe across ALL contained rows (time steps). Different from per-sample normalization.
"""
def __init__(self, norm_type, mean=None, std=None, min_val=None, max_val=None):
"""
Args:
norm_type: choose from:
"standardization", "minmax": normalizes dataframe across ALL contained rows (time steps)
"per_sample_std", "per_sample_minmax": normalizes each sample separately (i.e. across only its own rows)
mean, std, min_val, max_val: optional (num_feat,) Series of pre-computed values
"""
self.norm_type = norm_type
self.mean = mean
self.std = std
self.min_val = min_val
self.max_val = max_val
def normalize(self, df):
"""
Args:
df: input dataframe
Returns:
df: normalized dataframe
"""
if self.norm_type == "standardization":
if self.mean is None:
self.mean = df.mean()
self.std = df.std()
return (df - self.mean) / (self.std + np.finfo(float).eps)
elif self.norm_type == "minmax":
if self.max_val is None:
self.max_val = df.max()
self.min_val = df.min()
return (df - self.min_val) / (self.max_val - self.min_val + np.finfo(float).eps)
elif self.norm_type == "per_sample_std":
grouped = df.groupby(by=df.index)
return (df - grouped.transform('mean')) / grouped.transform('std')
elif self.norm_type == "per_sample_minmax":
grouped = df.groupby(by=df.index)
min_vals = grouped.transform('min')
return (df - min_vals) / (grouped.transform('max') - min_vals + np.finfo(float).eps)
else:
            raise NameError(f'Normalize method "{self.norm_type}" not implemented')
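# --- illustration (added) ---
# Hedged usage sketch for Normalizer: fit-on-first-call semantics, i.e. the
# statistics are computed from the first dataframe passed in and reused for
# subsequent calls (e.g. normalize the test split with train statistics).
def _demo_normalizer():
    train_df = pd.DataFrame({'a': [1.0, 2.0, 3.0, 4.0]})
    test_df = pd.DataFrame({'a': [5.0, 6.0]})
    norm = Normalizer('standardization')
    z_train = norm.normalize(train_df)   # sets norm.mean / norm.std
    z_test = norm.normalize(test_df)     # reuses the train statistics
    assert abs(float(z_train['a'].mean())) < 1e-9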
def interpolate_missing(y):
"""
Replaces NaN values in pd.Series `y` using linear interpolation
"""
if y.isna().any():
y = y.interpolate(method='linear', limit_direction='both')
return y
def subsample(y, limit=256, factor=2):
"""
If a given Series is longer than `limit`, returns subsampled sequence by the specified integer factor
"""
if len(y) > limit:
return y[::factor].reset_index(drop=True)
return y
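# --- illustration (added) ---
# subsample only thins sequences longer than `limit`, taking every
# `factor`-th point; shorter series pass through unchanged.
def _demo_subsample():
    assert len(subsample(pd.Series(np.arange(600.0)))) == 300
    assert len(subsample(pd.Series(np.arange(100.0)))) == 100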
class BaseData(object):
def set_num_processes(self, n_proc):
if (n_proc is None) or (n_proc <= 0):
self.n_proc = cpu_count() # max(1, cpu_count() - 1)
else:
self.n_proc = min(n_proc, cpu_count())
class HDD_data(BaseData):
"""
Dataset class for Hard Drive Disk failure dataset # TODO: INCOMPLETE: does not follow other datasets format
Attributes:
all_df: (num_samples * seq_len, num_columns) dataframe indexed by integer indices, with multiple rows corresponding to the same index (sample).
Each row is a time step; Each column contains either metadata (e.g. timestamp) or a feature.
all_IDs: (num_samples,) series of IDs contained in `all_df`/`feature_df` (same as all_df.index.unique() )
"""
def __init__(self, root_dir, file_list=None, pattern=None, n_proc=1, limit_size=None, config=None):
self.set_num_processes(n_proc=n_proc)
self.all_df = self.load_all(root_dir)
# Sort by serial number and date and index by serial number
self.all_df = self.all_df.sort_values(by=['serial_number', 'date'])
self.all_df = self.all_df.set_index('serial_number')
# all asset(disk) IDs (serial numbers)
self.all_IDs = self.all_df.index.unique()
self.failed_IDs = self.all_df[
self.all_df.failure == 1].index.unique() # IDs corresponding to assets which failed
self.normal_IDs = sorted(
list(set(self.all_IDs) - set(self.failed_IDs))) # IDs corresponding to assets without failure
def load_all(self, dir_path):
"""
Loads datasets from all csv files contained in `dir_path` into a dataframe
Args:
dir_path: directory containing all individual .csv files. Corresponds to a Quarter
Returns:
"""
# each file name corresponds to another date
input_paths = [os.path.join(dir_path, f) for f in os.listdir(dir_path)
if os.path.isfile(os.path.join(dir_path, f)) and f.endswith('.csv')]
if self.n_proc > 1:
# Load in parallel
# no more than file_names needed here
_n_proc = min(self.n_proc, len(input_paths))
logger.info("Loading {} datasets files using {} parallel processes ...".format(
len(input_paths), _n_proc))
with Pool(processes=_n_proc) as pool:
all_df = pd.concat(pool.map(HDD_data.load_single, input_paths))
else: # read 1 file at a time
all_df = pd.concat(HDD_data.load_single(path)
for path in input_paths)
return all_df
@staticmethod
def load_single(filepath):
df = HDD_data.read_data(filepath)
df = HDD_data.select_columns(df)
return df
@staticmethod
def read_data(filepath):
"""Reads a single .csv, which typically contains a day of datasets of various disks.
Only Seagate disks are retained."""
df = pd.read_csv(filepath)
# only Seagate models, starting with 'ST', are used
return df[df['model'].apply(lambda x: x.startswith('ST'))]
@staticmethod
def select_columns(df):
"""Smart9 is the drive's age in hours"""
df = df.dropna(
axis='columns', how='all') # drop columns containing only NaN
keep_cols = [col for col in df.columns if 'normalized' not in col]
df = df[keep_cols]
return df
@staticmethod
def process_columns(df):
df['date'] = pd.to_datetime(df['date'])
df['failure'] = df['failure'].astype(bool)
df[['capacity_bytes', 'model']] = df[[
'capacity_bytes', 'model']].astype('category')
return df
class WeldData(BaseData):
"""
Dataset class for welding dataset.
Attributes:
all_df: dataframe indexed by ID, with multiple rows corresponding to the same index (sample).
Each row is a time step; Each column contains either metadata (e.g. timestamp) or a feature.
feature_df: contains the subset of columns of `all_df` which correspond to selected features
feature_names: names of columns contained in `feature_df` (same as feature_df.columns)
all_IDs: IDs contained in `all_df`/`feature_df` (same as all_df.index.unique() )
max_seq_len: maximum sequence (time series) length. If None, script argument `max_seq_len` will be used.
(Moreover, script argument overrides this attribute)
"""
def __init__(self, root_dir, file_list=None, pattern=None, n_proc=1, limit_size=None, config=None):
self.set_num_processes(n_proc=n_proc)
self.all_df = self.load_all(
root_dir, file_list=file_list, pattern=pattern)
self.all_df = self.all_df.sort_values(
by=['weld_record_index']) # datasets is presorted
# TODO: There is a single ID that causes the model output to become nan - not clear why
# exclude particular ID
self.all_df = self.all_df[self.all_df['weld_record_index'] != 920397]
self.all_df = self.all_df.set_index('weld_record_index')
self.all_IDs = self.all_df.index.unique() # all sample (session) IDs
self.max_seq_len = 66
if limit_size is not None:
if limit_size > 1:
limit_size = int(limit_size)
else: # interpret as proportion if in (0, 1]
limit_size = int(limit_size * len(self.all_IDs))
self.all_IDs = self.all_IDs[:limit_size]
self.all_df = self.all_df.loc[self.all_IDs]
self.feature_names = ['wire_feed_speed',
'current', 'voltage', 'motor_current', 'power']
self.feature_df = self.all_df[self.feature_names]
def load_all(self, root_dir, file_list=None, pattern=None):
"""
Loads datasets from csv files contained in `root_dir` into a dataframe, optionally choosing from `pattern`
Args:
root_dir: directory containing all individual .csv files
file_list: optionally, provide a list of file paths within `root_dir` to consider.
Otherwise, entire `root_dir` contents will be used.
pattern: optionally, apply regex string to select subset of files
Returns:
all_df: a single (possibly concatenated) dataframe with all data corresponding to specified files
"""
# each file name corresponds to another date. Also tools (A, B) and others.
# Select paths for training and evaluation
if file_list is None:
data_paths = glob.glob(os.path.join(
root_dir, '*')) # list of all paths
else:
data_paths = [os.path.join(root_dir, p) for p in file_list]
if len(data_paths) == 0:
raise Exception('No files found using: {}'.format(
os.path.join(root_dir, '*')))
if pattern is None:
            # no pattern given: keep all paths
selected_paths = data_paths
else:
selected_paths = list(
filter(lambda x: re.search(pattern, x), data_paths))
input_paths = [p for p in selected_paths if os.path.isfile(
p) and p.endswith('.csv')]
if len(input_paths) == 0:
raise Exception(
"No .csv files found using pattern: '{}'".format(pattern))
if self.n_proc > 1:
# Load in parallel
# no more than file_names needed here
_n_proc = min(self.n_proc, len(input_paths))
logger.info("Loading {} datasets files using {} parallel processes ...".format(
len(input_paths), _n_proc))
with Pool(processes=_n_proc) as pool:
all_df = pd.concat(pool.map(WeldData.load_single, input_paths))
else: # read 1 file at a time
all_df = pd.concat(WeldData.load_single(path)
for path in input_paths)
return all_df
@staticmethod
def load_single(filepath):
df = WeldData.read_data(filepath)
df = WeldData.select_columns(df)
num_nan = df.isna().sum().sum()
if num_nan > 0:
logger.warning(
"{} nan values in {} will be replaced by 0".format(num_nan, filepath))
df = df.fillna(0)
return df
@staticmethod
def read_data(filepath):
"""Reads a single .csv, which typically contains a day of datasets of various weld sessions.
"""
df = pd.read_csv(filepath)
return df
@staticmethod
def select_columns(df):
""""""
df = df.rename(columns={"per_energy": "power"})
# Sometimes 'diff_time' is not measured correctly (is 0), and power ('per_energy') becomes infinite
is_error = df['power'] > 1e16
df.loc[is_error, 'power'] = df.loc[is_error,
'true_energy'] / df['diff_time'].median()
df['weld_record_index'] = df['weld_record_index'].astype(int)
keep_cols = ['weld_record_index', 'wire_feed_speed',
'current', 'voltage', 'motor_current', 'power']
df = df[keep_cols]
return df
class TSRegressionArchive(BaseData):
"""
Dataset class for datasets included in:
1) the Time Series Regression Archive (www.timeseriesregression.org), or
2) the Time Series Classification Archive (www.timeseriesclassification.com)
Attributes:
all_df: (num_samples * seq_len, num_columns) dataframe indexed by integer indices, with multiple rows corresponding to the same index (sample).
Each row is a time step; Each column contains either metadata (e.g. timestamp) or a feature.
feature_df: (num_samples * seq_len, feat_dim) dataframe; contains the subset of columns of `all_df` which correspond to selected features
feature_names: names of columns contained in `feature_df` (same as feature_df.columns)
all_IDs: (num_samples,) series of IDs contained in `all_df`/`feature_df` (same as all_df.index.unique() )
labels_df: (num_samples, num_labels) pd.DataFrame of label(s) for each sample
max_seq_len: maximum sequence (time series) length. If None, script argument `max_seq_len` will be used.
(Moreover, script argument overrides this attribute)
"""
def __init__(self, root_dir, file_list=None, pattern=None, n_proc=1, limit_size=None, config=None):
# self.set_num_processes(n_proc=n_proc)
self.config = config
self.all_df, self.labels_df = self.load_all(
root_dir, file_list=file_list, pattern=pattern)
# all sample IDs (integer indices 0 ... num_samples-1)
self.all_IDs = self.all_df.index.unique()
if limit_size is not None:
if limit_size > 1:
limit_size = int(limit_size)
else: # interpret as proportion if in (0, 1]
limit_size = int(limit_size * len(self.all_IDs))
self.all_IDs = self.all_IDs[:limit_size]
self.all_df = self.all_df.loc[self.all_IDs]
# use all features
self.feature_names = self.all_df.columns
self.feature_df = self.all_df
def load_all(self, root_dir, file_list=None, pattern=None):
"""
Loads datasets from csv files contained in `root_dir` into a dataframe, optionally choosing from `pattern`
Args:
root_dir: directory containing all individual .csv files
file_list: optionally, provide a list of file paths within `root_dir` to consider.
Otherwise, entire `root_dir` contents will be used.
pattern: optionally, apply regex string to select subset of files
Returns:
all_df: a single (possibly concatenated) dataframe with all data corresponding to specified files
labels_df: dataframe containing label(s) for each sample
"""
# Select paths for training and evaluation
if file_list is None:
data_paths = glob.glob(os.path.join(
root_dir, '*')) # list of all paths
else:
data_paths = [os.path.join(root_dir, p) for p in file_list]
if len(data_paths) == 0:
raise Exception('No files found using: {}'.format(
os.path.join(root_dir, '*')))
if pattern is None:
            # no pattern given: keep all paths
selected_paths = data_paths
else:
selected_paths = list(
filter(lambda x: re.search(pattern, x), data_paths))
input_paths = [p for p in selected_paths if os.path.isfile(
p) and p.endswith('.tsv')]
if len(input_paths) == 0:
raise Exception(
"No .tsv files found using pattern: '{}'".format(pattern))
df1, label1 = self.load_single(input_paths[0])
df2, label2 = self.load_single(input_paths[1])
all_df = pd.concat([df1, df2], ignore_index=True)
labels_df = pd.concat([label1, label2], ignore_index=True)
# all_df, labels_df = self.load_single(input_paths[0]) # a single file contains dataset
lengths = all_df.applymap(lambda x: len(x)).values
all_df = pd.concat((pd.DataFrame({col: all_df.loc[row, col] for col in all_df.columns}).reset_index(drop=True).set_index(
pd.Series(lengths[row, 0]*[row])) for row in range(all_df.shape[0])), axis=0)
# Replace NaN values
grp = all_df.groupby(by=all_df.index)
all_df = grp.transform(interpolate_missing)
# all_df.to_csv('/dev_data/zzj/hzy/pretrained_model/results_on_ucr/all_df.csv')
#
# (all_df.loc[0])
return all_df, labels_df
def load_single(self, filepath):
# Every row of the returned df corresponds to a sample;
# every column is a pd.Series indexed by timestamp and corresponds to a different dimension (feature)
if self.config['task'] == 'regression':
df, labels = utils.load_from_tsfile_to_dataframe(
filepath, return_separate_X_and_y=True, replace_missing_vals_with='NaN')
labels_df = pd.DataFrame(labels, dtype=np.float32)
elif self.config['task'] == 'classification':
# TODO UCR ver.
df, labels = load_data.load_from_ucr_tsv_to_dataframe(
filepath, return_separate_X_and_y=True)
#df, labels = load_data.load_from_tsfile_to_dataframe(filepath, return_separate_X_and_y=True, replace_missing_vals_with='NaN')
labels = pd.Series(labels, dtype="category")
self.class_names = labels.cat.categories
# int8-32 gives an error when using nn.CrossEntropyLoss
labels_df = pd.DataFrame(labels.cat.codes, dtype=np.int8)
        else:  # e.g. imputation; TODO: use ucr_tsv
            try:
                df, labels = load_data.load_from_ucr_tsv_to_dataframe(
                    filepath, return_separate_X_and_y=True)
            except Exception:
                df, labels = load_data.load_from_ucr_tsv_to_dataframe(
                    filepath, return_separate_X_and_y=True, replace_missing_vals_with='NaN')
labels = pd.Series(labels, dtype="category")
labels_df = pd.DataFrame(labels.cat.codes, dtype=np.int8)
# (num_samples, num_dimensions) array containing the length of each series
lengths = df.applymap(lambda x: len(x)).values
horiz_diffs = np.abs(lengths - np.expand_dims(lengths[:, 0], -1))
# most general check: len(np.unique(lengths.values)) > 1: # returns array of unique lengths of sequences
if np.sum(horiz_diffs) > 0: # if any row (sample) has varying length across dimensions
logger.warning(
"Not all time series dimensions have same length - will attempt to fix by subsampling first dimension...")
# TODO: this addresses a very specific case (PPGDalia)
df = df.applymap(subsample)
if self.config['subsample_factor']:
df = df.applymap(lambda x: subsample(
x, limit=0, factor=self.config['subsample_factor']))
lengths = df.applymap(lambda x: len(x)).values
vert_diffs = np.abs(lengths - np.expand_dims(lengths[0, :], 0))
if np.sum(vert_diffs) > 0: # if any column (dimension) has varying length across samples
self.max_seq_len = int(np.max(lengths[:, 0]))
logger.warning("Not all samples have same length: maximum length set to {}".format(
self.max_seq_len))
else:
self.max_seq_len = lengths[0, 0]
# First create a (seq_len, feat_dim) dataframe for each sample, indexed by a single integer ("ID" of the sample)
# Then concatenate into a (num_samples * seq_len, feat_dim) dataframe, with multiple rows corresponding to the
# sample index (i.e. the same scheme as all datasets in this project)
'''
df = pd.concat((pd.DataFrame({col: df.loc[row, col] for col in df.columns}).reset_index(drop=True).set_index(
pd.Series(lengths[row, 0]*[row])) for row in range(df.shape[0])), axis=0)
# Replace NaN values
grp = df.groupby(by=df.index)
df = grp.transform(interpolate_missing)
'''
return df, labels_df
class SemicondTraceData(BaseData):
"""
Dataset class for semiconductor manufacturing sensor trace data.
Attributes:
all_df: (num_samples * seq_len, num_columns) dataframe indexed by integer indices, with multiple rows corresponding to the same index (sample).
Each row is a time step; Each column contains either metadata (e.g. timestamp) or a feature.
feature_df: (num_samples * seq_len, feat_dim) dataframe; contains the subset of columns of `all_df` which correspond to selected features
feature_names: names of columns contained in `feature_df` (same as feature_df.columns)
all_IDs: (num_samples,) series of IDs contained in `all_df`/`feature_df` (same as all_df.index.unique() )
labels_df: (num_samples, num_labels) pd.DataFrame of label(s) for each sample
max_seq_len: maximum sequence (time series) length. If None, script argument `max_seq_len` will be used.
(Moreover, script argument overrides this attribute)
"""
# TODO: currently all *numeric* features which are not *dataset-wise* constant are kept. Sample-wise constants are included
features = ['Actual Bias voltage (AT/CH2/RFGen/RFMatch.rMatchBias)', 'Actual Pressure (AT/CH2/PressCtrl.rPress)',
'Ampoule wafer count (AT/CH2/Gaspanel/Stick01/BUBBLER.cAmpouleWaferCount)',
'Ampoule wafer count (AT/CH2/Gaspanel/Stick05/BUBBLER.cAmpouleWaferCount)',
'Backside Flow Reading (AT/CH2/VacChuck.rBacksideFlow)',
'Backside Pressure Reading (AT/CH2/VacChuck.rBacksidePress)',
'Backside Pressure Setpoint (AT/CH2/VacChuck.wBacksidePressSP)',
'Bubbler ampoule accumulated flow (AT/CH2/Gaspanel/Stick01/BUBBLER.cAmpouleLifeAccFlow)',
'Bubbler ampoule accumulated flow (AT/CH2/Gaspanel/Stick05/BUBBLER.cAmpouleLifeAccFlow)',
'Current Flow (AT/CH2/Gaspanel/Stick01.rFlow)', 'Current Flow (AT/CH2/Gaspanel/Stick01/Mfc.rFlow)',
'Current Flow (AT/CH2/Gaspanel/Stick02.rFlow)', 'Current Flow (AT/CH2/Gaspanel/Stick02/Mfc.rFlow)',
'Current Flow (AT/CH2/Gaspanel/Stick03.rFlow)', 'Current Flow (AT/CH2/Gaspanel/Stick03/Mfc.rFlow)',
'Current Flow (AT/CH2/Gaspanel/Stick05.rFlow)', 'Current Flow (AT/CH2/Gaspanel/Stick05/Mfc.rFlow)',
'Current Flow (AT/CH2/Gaspanel/Stick06.rFlow)', 'Current Flow (AT/CH2/Gaspanel/Stick06/Mfc.rFlow)',
'Current Flow (AT/CH2/Gaspanel/Stick09.rFlow)', 'Current Flow (AT/CH2/Gaspanel/Stick09/Mfc.rFlow)',
'Current Flow (AT/CH2/Gaspanel/Stick21.rFlow)', 'Current Flow (AT/CH2/Gaspanel/Stick21/Mfc.rFlow)',
'Current Flow (AT/CH2/Gaspanel/Stick22.rFlow)', 'Current Flow (AT/CH2/Gaspanel/Stick22/Mfc.rFlow)',
'Current Position SP Percent (AT/CH2/PressCtrl.rPosSPP)', 'Current Power SP (AT/CH2/RFGen.rPowerSP)',
'Current Pressure in PSI (AT/CH2/Gaspanel/Stick01/Transducer.rPressure)',
'Current Pressure in PSI (AT/CH2/Gaspanel/Stick05/Transducer.rPressure)',
'Current Pressure in PSI (AT/CH2/Gaspanel/Stick08/Transducer.rPressure)',
'Current Pressure in PSI (AT/CH2/Gaspanel/Stick09/Transducer.rPressure)',
'Current Pressure in Torr (AT/CH2/Gaspanel/Stick01/Transducer.rPressureTorr)',
'Current Pressure in Torr (AT/CH2/Gaspanel/Stick05/Transducer.rPressureTorr)',
'Current Pressure in Torr (AT/CH2/Gaspanel/Stick08/Transducer.rPressureTorr)',
'Current Pressure in Torr (AT/CH2/Gaspanel/Stick09/Transducer.rPressureTorr)',
'Current Recipe Count (AT/CH2/Clean/Idle Purge.CurRcpCnt)',
'Current Recipe Count (AT/CH2/Clean/On Load Clean.CurRcpCnt)',
'Current recipe step number (AT/CH2.@RecipeStep01)',
'Current servo error (AT/CH2/TempCtrl/Heater.rOutputCurrServoError)',
'Cycle Count (AT/CH2/Gaspanel/Stick01/Service/Cycle Purge By Pressure.cnfCycleCount)',
'Cycle Count (AT/CH2/Gaspanel/Stick01/Service/Cycle Purge By Time.cnfCycleCount)',
'Cycle Count (AT/CH2/Gaspanel/Stick05/Service/Cycle Purge By Pressure.cnfCycleCount)',
'Cycle Count (AT/CH2/Gaspanel/Stick05/Service/Cycle Purge By Time.cnfCycleCount)',
'Cycle Count (AT/CH2/Gaspanel/Stick08/Service/Cycle Purge By Pressure.cnfCycleCount)',
'Default temperature setpoint (AT/CH2/Watlow1/Ch_1.cDefaultSetpoint)',
'Default temperature setpoint (AT/CH2/Watlow1/Ch_2.cDefaultSetpoint)',
'Default temperature setpoint (AT/CH2/Watlow1/Ch_6.cDefaultSetpoint)',
'Default temperature setpoint (AT/CH2/Watlow2/Ch_4.cDefaultSetpoint)',
'Default temperature setpoint (AT/CH2/Watlow2/Ch_5.cDefaultSetpoint)',
'Estimated Ampoule wafer count (AT/CH2/Gaspanel/Stick05/BUBBLER.cEstAmpouleWaferCount)',
'Expected Lid Heater Temperature (AT/CH2/Rcp.wHdrLidHtrTemp)',
'Final Leak Check pressure (AT/CH2/Services/CVDLeakCheck/LeakCheck.rLeakCheckFinalPressure)',
'Final Leak Rate (AT/CH2/Services/CVDLeakCheck/LeakCheck.rFinalLeakRate)',
'Flow Setpoint (AT/CH2/Gaspanel/Stick02/Mfc.wSetpoint)',
'Flow Setpoint (AT/CH2/Gaspanel/Stick03/Mfc.wSetpoint)',
'Flow Setpoint (AT/CH2/Gaspanel/Stick05/Mfc.wSetpoint)',
'Flow Setpoint (AT/CH2/Gaspanel/Stick06/Mfc.wSetpoint)',
'Flow Setpoint (AT/CH2/Gaspanel/Stick09/Mfc.wSetpoint)',
'Flow Setpoint (AT/CH2/Gaspanel/Stick21/Mfc.wSetpoint)',
'Flow Setpoint (AT/CH2/Gaspanel/Stick22/Mfc.wSetpoint)',
'Next wafer slot, side 1 (AT/CH2.@NextCassSlot01_01)',
'Next wafer src, side 1 (AT/CH2.@NextCassId01_01)', 'Temp Reading (AT/CH2/Watlow1/Ch_1.rTempReading)',
'Temp Reading (AT/CH2/Watlow1/Ch_2.rTempReading)', 'Temp Reading (AT/CH2/Watlow1/Ch_3.rTempReading)',
'Temp Reading (AT/CH2/Watlow1/Ch_4.rTempReading)', 'Temp Reading (AT/CH2/Watlow1/Ch_5.rTempReading)',
'Temp Reading (AT/CH2/Watlow1/Ch_6.rTempReading)', 'Temp Reading (AT/CH2/Watlow1/Ch_7.rTempReading)',
'Temp Reading (AT/CH2/Watlow1/Ch_8.rTempReading)', 'Temp Reading (AT/CH2/Watlow2/Ch_1.rTempReading)',
'Temp Reading (AT/CH2/Watlow2/Ch_2.rTempReading)', 'Temp Reading (AT/CH2/Watlow2/Ch_3.rTempReading)',
'Temp Reading (AT/CH2/Watlow2/Ch_4.rTempReading)', 'Temp Reading (AT/CH2/Watlow2/Ch_5.rTempReading)',
'Temp Reading (AT/CH2/Watlow2/Ch_6.rTempReading)', 'Temp Reading (AT/CH2/Watlow2/Ch_8.rTempReading)',
'Temp Reading (AT/CH2/Watlow3/Ch_1.rTempReading)', 'Temp Reading (AT/CH2/Watlow3/Ch_2.rTempReading)',
'Temp Reading (AT/CH2/Watlow3/Ch_3.rTempReading)', 'Temp Reading (AT/CH2/Watlow3/Ch_4.rTempReading)',
'Temp Reading (AT/CH2/Watlow3/Ch_5.rTempReading)', 'Temp Reading (AT/CH2/Watlow3/Ch_6.rTempReading)',
'Temp Reading (AT/CH2/Watlow3/Ch_7.rTempReading)']
def __init__(self, root_dir, file_list=None, pattern=None, n_proc=8, limit_size=None, config=None):
self.set_num_processes(n_proc=n_proc)
# Get labels
wafer_measurements_path = os.path.join(root_dir, "waferdata/")
logger.info("Getting wafer measurements ...")
# Dataframe which holds all measurements: mean thickness (and deposition rate for the subset "type 1"), roughness (std of thickness)
measurements_df = self.get_measurements(wafer_measurements_path)
# Get metadata (e.g. mapping between measurement file and trace file)
catalog_path = os.path.join(root_dir, "CTF03.catalog.20200629.csv")
logger.info("Getting wafer metadata ...")
# This dataframe holds for all wafers all metadata per wafer (including measurements, when they exist, and corresponding trace file)
metadata_df = self.get_metadata(catalog_path, measurements_df)
# TODO: select subset here (e.g. here 20A wafers selected), or set file_list=None to use all
files_20A = metadata_df.loc[metadata_df['ChamberRecipeID']
== 'QUALCH2CO20', 'TraceDataFile']
IDs_20A = files_20A.index
files_20A = list(map(self.convert_tracefilename, files_20A))
# Get trace files
tracedata_dir = os.path.join(
root_dir, "tracedata/CTF03_CH2_QUALCH2CO_CH2_G0009")
logger.info("Getting sensor trace data ...")
self.all_df = self.load_all(
tracedata_dir, file_list=files_20A, pattern=pattern)
self.all_IDs = self.all_df.index.unique() # all sample (session) IDs
# TODO: select prediction objective here: any of ['Mean_dep_rate', 'std_thickness', 'mean_thickness']
if config['task'] == 'regression':
labels_col = config['labels'] if config['labels'] else 'Mean_dep_rate'
self.labels_df = pd.DataFrame(
metadata_df.loc[self.all_IDs, labels_col], dtype=np.float32)
self.labels_df = self.labels_df[~self.labels_df[labels_col].isna()]
self.all_IDs = self.labels_df.index
self.all_df = self.all_df.loc[self.all_IDs]
self.max_seq_len = 130 # TODO: for 20A
if (limit_size is not None) and (limit_size < len(self.all_IDs)):
if limit_size > 1:
limit_size = int(limit_size)
else: # interpret as proportion if in (0, 1]
limit_size = int(limit_size * len(self.all_IDs))
self.all_IDs = self.all_IDs[:limit_size]
self.all_df = self.all_df.loc[self.all_IDs]
self.feature_names = SemicondTraceData.features
self.feature_df = self.all_df[self.feature_names]
# Replace NaN values (at this point due to some columns missing from some trace files)
if self.feature_df.isna().any().any():
self.feature_df = self.feature_df.fillna(0)
return
def make_pjid(self, toolID, pjID):
"""Convert PJID format of catalog file to the one used in measurement files"""
return toolID + '-' + pjID.split('.')[0]
def convert_tracefilename(self, filepath):
"""
This processing depends on how tracefiles are stored (flat directory hierarchy or not, .csv or .zip)
See retrieve_tracefiles.py for options.
Here, a flat hierarchy and .csv format is assumed
"""
filename, extension = os.path.splitext(os.path.basename(filepath))
return filename + '.csv'
def get_measurements(self, wafer_measurements_path):
# There are 2 "types" of files, the ones that start with "Rate" and contain mean deposition rate (type 1)
# and the ones that start with "mCTF" (type 2),
# which only contain mean thickness and fewer columns with different ID and measurement column name.
deprate_df1 = self.load_all(
wafer_measurements_path, pattern="Rate_time_series.*_Average_", mode='simple')
deprate_df1 = deprate_df1.rename(
columns={"Mea_value": "mean_thickness"})
deprate_df2 = self.load_all(
wafer_measurements_path, pattern=r"/mCTF.*_Average_", mode='simple')
deprate_df2 = deprate_df2.rename(
columns={"Wafer_mean": "mean_thickness"})
# Merge the 2 types for deposition rate/thickness
deprate_df = pd.merge(deprate_df1, deprate_df2, how='outer', left_on=['Proc_cj_id', 'Wafer_id'],
right_on=['Control_job_id', 'Wafer_id'],
left_index=False, right_index=False, sort=True,
suffixes=(None, '_right'), copy=True, indicator=True,
validate=None)
# The 2 types contain overlapping sets of wafers, so we need to form a common measurement column
right_only = deprate_df['mean_thickness'].isnull()
deprate_df.loc[right_only,
'mean_thickness'] = deprate_df.loc[right_only, 'mean_thickness_right']
# Repeat process for roughness (std of thickness) measurement files
roughness_df1 = self.load_all(
wafer_measurements_path, pattern="Rate_time_series.*_StdDev_", mode='simple')
roughness_df1 = roughness_df1.rename(
columns={"Std_dep_thk": "std_thickness"})
roughness_df2 = self.load_all(
wafer_measurements_path, pattern=r"/mCTF.*_StdDev_", mode='simple')
roughness_df2 = roughness_df2.rename(
columns={"Wafer_std": "std_thickness"})
roughness_df = pd.merge(roughness_df1, roughness_df2, how='outer', left_on=['Proc_cj_id', 'Wafer_id'],
right_on=['Control_job_id', 'Wafer_id'],
left_index=False, right_index=False, sort=True,
suffixes=(None, '_right'), copy=True, indicator=True,
validate=None)
right_only = roughness_df['std_thickness'].isnull()
roughness_df.loc[right_only,
'std_thickness'] = roughness_df.loc[right_only, 'std_thickness_right']
# Dataframe which holds all measurements: mean thickness (and deposition rate for the subset "type 1"), roughness (std of thickness)
measurements_df = pd.merge(deprate_df, roughness_df, how='inner', on=['Proc_cj_id', 'Wafer_id'],
left_index=False, right_index=False, sort=True,
suffixes=('_x', '_y'), copy=True, indicator=False,
validate=None)
assert sum(measurements_df.mean_thickness.isnull()
) == 0, "Missing thickness measurements"
assert sum(measurements_df.std_thickness.isnull()
) == 0, "Missing roughness measurements"
return measurements_df
def get_metadata(self, catalog_path, measurements_df):
catalog_df = pd.read_csv(catalog_path)
# Restrict to Chamber 2
catalog_df = catalog_df[catalog_df['ChamberID'] == 'CH2']
        # Restrict to the recipes corresponding to existing measurement wafers and associated product wafers
catalog_df = catalog_df[catalog_df['ChamberRecipeID'].isin(
['QUALCH2CO20', 'QUALCH2CO100', 'CH2_G0009'])]
catalog_df['pjid'] = catalog_df[['ToolID', 'PJID']].apply(
lambda x: self.make_pjid(*x), axis=1)
# This dataframe holds all metadata per wafer (including measurements, when they exist, and corresponding trace file)
metadata_df = pd.merge(catalog_df, measurements_df, how='left', left_on=['pjid', 'WaferID'],
right_on=['Proc_cj_id', 'Wafer_id'],
left_index=False, right_index=False, sort=True,
suffixes=('_x', '_y'), copy=True, indicator=False,
validate=None)
metadata_df = metadata_df.set_index('WaferPassID')
return metadata_df
def load_all(self, root_dir, file_list=None, pattern=None, mode=None):
"""
Loads datasets from csv files contained in `root_dir` into a dataframe, optionally choosing from `pattern`
Args:
root_dir: directory containing all individual .csv files
file_list: optionally, provide a list of file paths within `root_dir` to consider.
Otherwise, entire `root_dir` contents will be used.
pattern: optionally, apply regex string to select subset of files
            mode: if 'simple', files are read sequentially with plain pd.read_csv (no per-file preprocessing)
Returns:
all_df: a single (possibly concatenated) dataframe with all data corresponding to specified files
"""
# if func is None:
# func = SemicondTraceData.load_single
# Select paths for training and evaluation
if file_list is None:
data_paths = glob.glob(os.path.join(
root_dir, '*')) # list of all paths
else:
data_paths = [os.path.join(root_dir, p) for p in file_list]
if len(data_paths) == 0:
raise Exception('No files found using: {}'.format(
os.path.join(root_dir, '*')))
if pattern is None:
            # no pattern given: keep all paths
selected_paths = data_paths
else:
selected_paths = list(
filter(lambda x: re.search(pattern, x), data_paths))
input_paths = [p for p in selected_paths if os.path.isfile(
p) and p.endswith('.csv')]
if len(input_paths) == 0:
raise Exception(
"No .csv files found using pattern: '{}'".format(pattern))
if (mode != 'simple') and (self.n_proc > 1):
# Load in parallel
# no more than file_names needed here
_n_proc = min(self.n_proc, len(input_paths))
logger.info("Loading {} datasets files using {} parallel processes ...".format(
len(input_paths), _n_proc))
with Pool(processes=_n_proc) as pool:
# done like this because multiprocessing needs the *explicit* function call
# and not a reference to a function, e.g. func = pd.read_csv
all_df = pd.concat(
pool.map(SemicondTraceData.load_single, input_paths))
else: # read 1 file at a time
if mode == 'simple':
all_df = pd.concat(pd.read_csv(path)
for path in tqdm(input_paths))
else:
all_df = pd.concat(SemicondTraceData.load_single(path)
for path in tqdm(input_paths))
return all_df
@staticmethod
def load_single(filepath):
df = SemicondTraceData.read_data(filepath)
df = SemicondTraceData.select_columns(df)
df['TimeStamp'] = pd.to_datetime(df['TimeStamp'])
df = df.sort_values(by=['WaferPassID', 'TimeStamp'])
df = df.set_index('WaferPassID')
# Replace NaN values (at this point, these are missing values in a variable/sequence)
# because some columns are missing in some tracefiles
feat_col = [
col for col in df.columns if col in SemicondTraceData.features]
if df[feat_col].isna().any().any():
grp = df.groupby(by=df.index)
df.loc[:, feat_col] = grp.transform(interpolate_missing)
return df
@staticmethod
def read_data(filepath):
"""Reads a single .csv, which typically contains a day of datasets of various weld sessions.
"""
df = pd.read_csv(filepath)
return df
@staticmethod
def select_columns(df):
# Kept just as an example
# df = df.rename(columns={"per_energy": "power"})
# # Sometimes 'diff_time' is not measured correctly (is 0), and power ('per_energy') becomes infinite
# is_error = df['power'] > 1e16
# df.loc[is_error, 'power'] = df.loc[is_error, 'true_energy'] / df['diff_time'].median()
# keep_cols = ['weld_record_index', 'wire_feed_speed', 'current', 'voltage', 'motor_current', 'power']
# df = df[keep_cols]
# This doesn't work because some columns are missing in some tracefiles
# keep_cols = ['WaferPassID', 'TimeStamp'] + SemicondTraceData.features
# df = df[keep_cols]
return df
class PMUData(BaseData):
"""
Dataset class for Phasor Measurement Unit dataset.
Attributes:
all_df: dataframe indexed by ID, with multiple rows corresponding to the same index (sample).
Each row is a time step; Each column contains either metadata (e.g. timestamp) or a feature.
feature_df: contains the subset of columns of `all_df` which correspond to selected features
feature_names: names of columns contained in `feature_df` (same as feature_df.columns)
all_IDs: IDs contained in `all_df`/`feature_df` (same as all_df.index.unique() )
max_seq_len: maximum sequence (time series) length (optional). Used only if script argument `max_seq_len` is not
defined.
"""
def __init__(self, root_dir, file_list=None, pattern=None, n_proc=1, limit_size=None, config=None):
self.set_num_processes(n_proc=n_proc)
self.all_df = self.load_all(
root_dir, file_list=file_list, pattern=pattern)
if config['data_window_len'] is not None:
self.max_seq_len = config['data_window_len']
# construct sample IDs: 0, 0, ..., 0, 1, 1, ..., 1, 2, ..., (num_whole_samples - 1)
# num_whole_samples = len(self.all_df) // self.max_seq_len # commented code is for more general IDs
# IDs = list(chain.from_iterable(map(lambda x: repeat(x, self.max_seq_len), range(num_whole_samples + 1))))
# IDs = IDs[:len(self.all_df)] # either last sample is completely superfluous, or it has to be shortened
IDs = [i // self.max_seq_len for i in range(self.all_df.shape[0])]
self.all_df.insert(loc=0, column='ExID', value=IDs)
else:
# self.all_df = self.all_df.sort_values(by=['ExID']) # dataset is presorted
self.max_seq_len = 30
self.all_df = self.all_df.set_index('ExID')
# rename columns
self.all_df.columns = [re.sub(r'\d+', str(i//3), col_name)
for i, col_name in enumerate(self.all_df.columns[:])]
#self.all_df.columns = ["_".join(col_name.split(" ")[:-1]) for col_name in self.all_df.columns[:]]
self.all_IDs = self.all_df.index.unique() # all sample (session) IDs
if limit_size is not None:
if limit_size > 1:
limit_size = int(limit_size)
else: # interpret as proportion if in (0, 1]
limit_size = int(limit_size * len(self.all_IDs))
self.all_IDs = self.all_IDs[:limit_size]
self.all_df = self.all_df.loc[self.all_IDs]
self.feature_names = self.all_df.columns # all columns are used as features
self.feature_df = self.all_df[self.feature_names]
def load_all(self, root_dir, file_list=None, pattern=None):
"""
Loads datasets from csv files contained in `root_dir` into a dataframe, optionally choosing from `pattern`
Args:
root_dir: directory containing all individual .csv files
file_list: optionally, provide a list of file paths within `root_dir` to consider.
Otherwise, entire `root_dir` contents will be used.
pattern: optionally, apply regex string to select subset of files
Returns:
all_df: a single (possibly concatenated) dataframe with all data corresponding to specified files
"""
# Select paths for training and evaluation
if file_list is None:
data_paths = glob.glob(os.path.join(
root_dir, '*')) # list of all paths
else:
data_paths = [os.path.join(root_dir, p) for p in file_list]
if len(data_paths) == 0:
raise Exception('No files found using: {}'.format(
os.path.join(root_dir, '*')))
if pattern is None:
            # no pattern given: keep all paths
selected_paths = data_paths
else:
selected_paths = list(
filter(lambda x: re.search(pattern, x), data_paths))
input_paths = [p for p in selected_paths if os.path.isfile(
p) and p.endswith('.csv')]
if len(input_paths) == 0:
raise Exception(
"No .csv files found using pattern: '{}'".format(pattern))
if self.n_proc > 1:
# Load in parallel
# no more than file_names needed here
_n_proc = min(self.n_proc, len(input_paths))
logger.info("Loading {} datasets files using {} parallel processes ...".format(
len(input_paths), _n_proc))
with Pool(processes=_n_proc) as pool:
all_df = pd.concat(pool.map(PMUData.load_single, input_paths))
else: # read 1 file at a time
all_df = pd.concat(PMUData.load_single(path)
for path in input_paths)
return all_df
@staticmethod
def load_single(filepath):
df = PMUData.read_data(filepath)
#df = PMUData.select_columns(df)
num_nan = df.isna().sum().sum()
if num_nan > 0:
logger.warning(
"{} nan values in {} will be replaced by 0".format(num_nan, filepath))
df = df.fillna(0)
return df
@staticmethod
def read_data(filepath):
"""Reads a single .csv, which typically contains a day of datasets of various weld sessions.
"""
df = pd.read_csv(filepath)
return df
data_factory = {'weld': WeldData,
'hdd': HDD_data,
'tsra': TSRegressionArchive,
'semicond': SemicondTraceData,
'pmu': PMUData}
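# --- illustration (added; not part of the original file) ---
# Hedged usage sketch: data_factory maps a key to a loader class. The 'tsra'
# entry is the one relevant to the UCR runs in this repo; the path and config
# values below are hypothetical.
def _example_build_tsra(root_dir='/path/to/UCR/ArrowHead'):
    config = {'task': 'classification', 'subsample_factor': None}
    data_class = data_factory['tsra']
    return data_class(root_dir, pattern='TRAIN|TEST', config=config)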
================================================
FILE: ts_classification_methods/tst_cls/src/datasets/dataset.py
================================================
import numpy as np
from torch.utils.data import Dataset
import torch
class ImputationDataset(Dataset):
"""Dynamically computes missingness (noise) mask for each sample"""
def __init__(self, data, indices, mean_mask_length=3, masking_ratio=0.15,
mode='separate', distribution='geometric', exclude_feats=None, device=None, feature_df=None):
super(ImputationDataset, self).__init__()
self.data = data # this is a subclass of the BaseData class in data.py
self.IDs = indices # list of data IDs, but also mapping between integer index and ID
if feature_df is not None:
self.feature_df = feature_df
else:
self.feature_df = self.data.feature_df.loc[self.IDs]
self.masking_ratio = masking_ratio
self.mean_mask_length = mean_mask_length
self.mode = mode
self.distribution = distribution
self.exclude_feats = exclude_feats
self.device = device
def __getitem__(self, ind):
"""
For a given integer index, returns the corresponding (seq_length, feat_dim) array and a noise mask of same shape
Args:
ind: integer index of sample in dataset
Returns:
X: (seq_length, feat_dim) tensor of the multivariate time series corresponding to a sample
mask: (seq_length, feat_dim) boolean tensor: 0s mask and predict, 1s: unaffected input
ID: ID of sample
"""
# X = self.feature_df.loc[self.IDs[ind]].values # (seq_length, feat_dim) array
X = self.feature_df.loc[ind].values
mask = noise_mask(X, self.masking_ratio, self.mean_mask_length, self.mode, self.distribution,
self.exclude_feats) # (seq_length, feat_dim) boolean array
return torch.from_numpy(X), torch.from_numpy(mask), self.IDs[ind]
def update(self):
self.mean_mask_length = min(20, self.mean_mask_length + 1)
self.masking_ratio = min(1, self.masking_ratio + 0.05)
def __len__(self):
return len(self.IDs)
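# Note (added): ImputationDataset.update() implements a simple curriculum --
# each call lengthens the average masked span (capped at 20 steps) and raises
# the masking ratio (capped at 1.0), so the denoising task hardens over epochs.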
class TransductionDataset(Dataset):
def __init__(self, data, indices, mask_feats, start_hint=0.0, end_hint=0.0):
super(TransductionDataset, self).__init__()
self.data = data # this is a subclass of the BaseData class in data.py
self.IDs = indices # list of data IDs, but also mapping between integer index and ID
self.feature_df = self.data.feature_df.loc[self.IDs]
# list/array of indices corresponding to features to be masked
self.mask_feats = mask_feats
# proportion at beginning of time series which will not be masked
self.start_hint = start_hint
# end_hint: proportion at the end of time series which will not be masked
self.end_hint = end_hint
def __getitem__(self, ind):
"""
For a given integer index, returns the corresponding (seq_length, feat_dim) array and a noise mask of same shape
Args:
ind: integer index of sample in dataset
Returns:
X: (seq_length, feat_dim) tensor of the multivariate time series corresponding to a sample
mask: (seq_length, feat_dim) boolean tensor: 0s mask and predict, 1s: unaffected input
ID: ID of sample
"""
# (seq_length, feat_dim) array
X = self.feature_df.loc[self.IDs[ind]].values
mask = transduct_mask(X, self.mask_feats, self.start_hint,
self.end_hint) # (seq_length, feat_dim) boolean array
return torch.from_numpy(X), torch.from_numpy(mask), self.IDs[ind]
def update(self):
self.start_hint = max(0, self.start_hint - 0.1)
self.end_hint = max(0, self.end_hint - 0.1)
def __len__(self):
return len(self.IDs)
def collate_superv(data, max_len=None, device=None):
"""Build mini-batch tensors from a list of (X, mask) tuples. Mask input. Create
Args:
        data: list of length batch_size, containing tuples (X, y, ID):
- X: torch tensor of shape (seq_length, feat_dim); variable seq_length.
- y: torch tensor of shape (num_labels,) : class indices or numerical targets
(for classification or regression, respectively). num_labels > 1 for multi-task models
max_len: global fixed sequence length. Used for architectures requiring fixed length input,
where the batch length cannot vary dynamically. Longer sequences are clipped, shorter are padded with 0s
    Returns:
        X: (batch_size, padded_length, feat_dim) torch tensor of padded features (input)
        targets: (batch_size, num_labels) torch tensor of class indices or numerical targets
        padding_masks: (batch_size, padded_length) boolean tensor, 1 means keep vector at this position, 0 means padding
        IDs: tuple of sample IDs in the batch
    """
batch_size = len(data)
    features, labels, IDs = zip(*data)
# Stack and pad features and masks (convert 2D to 3D tensors, i.e. add batch dimension)
# original sequence length for each time series
lengths = [X.shape[0] for X in features]
if max_len is None:
max_len = max(lengths)
# (batch_size, padded_length, feat_dim)
X = torch.zeros(batch_size, max_len, features[0].shape[-1])
for i in range(batch_size):
end = min(lengths[i], max_len)
X[i, :end, :] = features[i][:end, :]
targets = torch.stack(labels, dim=0) # (batch_size, num_labels)
padding_masks = padding_mask(torch.tensor(lengths, dtype=torch.int16),
max_len=max_len) # (batch_size, padded_length) boolean tensor, "1" means keep
return X, targets, padding_masks, IDs
class ClassiregressionDataset(Dataset):
def __init__(self, data, indices, device=None, feature_df=None):
super(ClassiregressionDataset, self).__init__()
self.data = data # this is a subclass of the BaseData class in data.py
self.IDs = indices # list of data IDs, but also mapping between integer index and ID
if feature_df is None:
self.feature_df = self.data.feature_df.loc[self.IDs]
else:
self.feature_df = feature_df
self.labels_df = self.data.labels_df.loc[self.IDs]
self.device = device
num_data = len(self.IDs)
'''
self.flatten_x = torch.from_numpy(np.array(self.feature_df).reshape((num_data, -1))).to(self.device)
self.flatten_y = torch.from_numpy(np.array(self.labels_df.loc[indices]).reshape((num_data,))).to(self.device)
'''
def __getitem__(self, ind):
"""
        For a given integer index, returns the corresponding (seq_length, feat_dim) array and the associated label(s)
Args:
ind: integer index of sample in dataset
Returns:
X: (seq_length, feat_dim) tensor of the multivariate time series corresponding to a sample
y: (num_labels,) tensor of labels (num_labels > 1 for multi-task models) for each sample
ID: ID of sample
"""
        # the original lookup was self.feature_df.loc[self.IDs[ind]]; here feature_df is
        # assumed to be indexable by the integer position directly (labels remain keyed by ID)
        X = self.feature_df.loc[ind].values  # (seq_length, feat_dim) array
        y = self.labels_df.loc[self.IDs[ind]].values  # (num_labels,) array
return torch.from_numpy(X), torch.from_numpy(y), self.IDs[ind]
# return self.flatten_x[ind], self.flatten_y[ind]
def __len__(self):
return len(self.IDs)
def transduct_mask(X, mask_feats, start_hint=0.0, end_hint=0.0):
"""
Creates a boolean mask of the same shape as X, with 0s at places where a feature should be masked.
Args:
X: (seq_length, feat_dim) numpy array of features corresponding to a single sample
mask_feats: list/array of indices corresponding to features to be masked
        start_hint: proportion at the beginning of the time series which will not be masked
end_hint: proportion at the end of time series which will not be masked
Returns:
boolean numpy array with the same shape as X, with 0s at places where a feature should be masked
"""
mask = np.ones(X.shape, dtype=bool)
start_ind = int(start_hint * X.shape[0])
end_ind = max(start_ind, int((1 - end_hint) * X.shape[0]))
mask[start_ind:end_ind, mask_feats] = 0
return mask
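# Editorial sketch (not part of the original module): a minimal example of what
# `transduct_mask` produces. With a (10, 3) sample, feature 1 to be masked, and 20%
# hints kept at each end, rows 2..7 of column 1 come out as 0.
def _example_transduct_mask():
    X = np.zeros((10, 3))
    mask = transduct_mask(X, mask_feats=[1], start_hint=0.2, end_hint=0.2)
    assert mask[:2, 1].all() and mask[8:, 1].all()  # hinted ends stay unmasked
    assert not mask[2:8, 1].any()                   # middle of feature 1 is masked
    assert mask[:, 0].all() and mask[:, 2].all()    # other features are untouched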
def compensate_masking(X, mask):
"""
Compensate feature vectors after masking values, in a way that the matrix product W @ X would not be affected on average.
If p is the proportion of unmasked (active) elements, X' = X / p = X * feat_dim/num_active
Args:
X: (batch_size, seq_length, feat_dim) torch tensor
mask: (batch_size, seq_length, feat_dim) torch tensor: 0s means mask and predict, 1s: unaffected (active) input
Returns:
(batch_size, seq_length, feat_dim) compensated features
"""
# number of unmasked elements of feature vector for each time step
# (batch_size, seq_length, 1)
num_active = torch.sum(mask, dim=-1).unsqueeze(-1)
# to avoid division by 0, set the minimum to 1
num_active = torch.max(num_active, torch.ones(
num_active.shape, dtype=torch.int16)) # (batch_size, seq_length, 1)
return X.shape[-1] * X / num_active
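# Editorial sketch (not part of the original module): `compensate_masking` rescales
# each time step by feat_dim / num_active so that, for a linear map W, W @ X' matches
# W @ X in expectation despite the zeroed entries. With 2 of 4 features active, the
# surviving values are doubled.
def _example_compensate_masking():
    X = torch.ones(1, 1, 4)                # (batch_size, seq_length, feat_dim)
    mask = torch.tensor([[[1, 1, 0, 0]]])  # 2 of 4 features active
    X_comp = compensate_masking(X * mask, mask)
    assert torch.allclose(X_comp, torch.tensor([[[2., 2., 0., 0.]]]))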
def collate_unsuperv(data, max_len=None, mask_compensation=False):
"""Build mini-batch tensors from a list of (X, mask) tuples. Mask input. Create
Args:
data: len(batch_size) list of tuples (X, mask).
- X: torch tensor of shape (seq_length, feat_dim); variable seq_length.
- mask: boolean torch tensor of shape (seq_length, feat_dim); variable seq_length.
max_len: global fixed sequence length. Used for architectures requiring fixed length input,
where the batch length cannot vary dynamically. Longer sequences are clipped, shorter are padded with 0s
Returns:
X: (batch_size, padded_length, feat_dim) torch tensor of masked features (input)
targets: (batch_size, padded_length, feat_dim) torch tensor of unmasked features (output)
target_masks: (batch_size, padded_length, feat_dim) boolean torch tensor
0 indicates masked values to be predicted, 1 indicates unaffected/"active" feature values
padding_masks: (batch_size, padded_length) boolean tensor, 1 means keep vector at this position, 0 ignore (padding)
"""
batch_size = len(data)
features, masks, IDs = zip(*data)
# Stack and pad features and masks (convert 2D to 3D tensors, i.e. add batch dimension)
# original sequence length for each time series
lengths = [X.shape[0] for X in features]
if max_len is None:
max_len = max(lengths)
# (batch_size, padded_length, feat_dim)
X = torch.zeros(batch_size, max_len, features[0].shape[-1])
target_masks = torch.zeros_like(X,
dtype=torch.bool) # (batch_size, padded_length, feat_dim) masks related to objective
for i in range(batch_size):
end = min(lengths[i], max_len)
X[i, :end, :] = features[i][:end, :]
target_masks[i, :end, :] = masks[i][:end, :]
targets = X.clone()
X = X * target_masks # mask input
if mask_compensation:
X = compensate_masking(X, target_masks)
# (batch_size, padded_length) boolean tensor, "1" means keep
padding_masks = padding_mask(torch.tensor(
lengths, dtype=torch.int16), max_len=max_len)
target_masks = ~target_masks # inverse logic: 0 now means ignore, 1 means predict
return X, targets, target_masks, padding_masks, IDs
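# Editorial sketch (not part of the original module): how `collate_unsuperv` is
# typically wired into a DataLoader together with `ImputationDataset`. The constructor
# arguments shown are assumptions based on the attributes set above; `my_data`,
# `train_indices` and `max_len=512` are hypothetical.
#
#   train_dataset = ImputationDataset(my_data, train_indices,
#                                     mean_mask_length=3, masking_ratio=0.15)
#   train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True,
#                             collate_fn=lambda b: collate_unsuperv(b, max_len=512))
#   X, targets, target_masks, padding_masks, IDs = next(iter(train_loader))
#   # X is the masked input; the loss is computed where target_masks is 1.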
def noise_mask(X, masking_ratio, lm=3, mode='separate', distribution='geometric', exclude_feats=None):
"""
Creates a random boolean mask of the same shape as X, with 0s at places where a feature should be masked.
Args:
X: (seq_length, feat_dim) numpy array of features corresponding to a single sample
masking_ratio: proportion of seq_length to be masked. At each time step, will also be the proportion of
feat_dim that will be masked on average
lm: average length of masking subsequences (streaks of 0s). Used only when `distribution` is 'geometric'.
        mode: whether each variable should be masked separately ('separate'), or all variables at certain
            positions should be masked concurrently ('concurrent')
        distribution: whether each mask sequence element is sampled independently at random, or whether
            sampling follows a Markov chain (and is thus stateful), resulting in geometric distributions of
            masked subsequences of a desired mean length `lm`
exclude_feats: iterable of indices corresponding to features to be excluded from masking (i.e. to remain all 1s)
Returns:
boolean numpy array with the same shape as X, with 0s at places where a feature should be masked
"""
if exclude_feats is not None:
exclude_feats = set(exclude_feats)
if distribution == 'geometric': # stateful (Markov chain)
if mode == 'separate': # each variable (feature) is independent
mask = np.ones(X.shape, dtype=bool)
for m in range(X.shape[1]): # feature dimension
if exclude_feats is None or m not in exclude_feats:
mask[:, m] = geom_noise_mask_single(
X.shape[0], lm, masking_ratio) # time dimension
# replicate across feature dimension (mask all variables at the same positions concurrently)
else:
mask = np.tile(np.expand_dims(geom_noise_mask_single(
X.shape[0], lm, masking_ratio), 1), X.shape[1])
else: # each position is independent Bernoulli with p = 1 - masking_ratio
if mode == 'separate':
mask = np.random.choice(np.array([True, False]), size=X.shape, replace=True,
p=(1 - masking_ratio, masking_ratio))
else:
mask = np.tile(np.random.choice(np.array([True, False]), size=(X.shape[0], 1), replace=True,
p=(1 - masking_ratio, masking_ratio)), X.shape[1])
return mask
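# Editorial sketch (not part of the original module): for both distributions, the
# empirical masking rate of `noise_mask` should approach `masking_ratio` on a long
# enough sample.
def _example_noise_mask():
    X = np.zeros((10000, 3))
    for distribution in ('geometric', 'bernoulli'):
        mask = noise_mask(X, masking_ratio=0.25, lm=3, mode='separate',
                          distribution=distribution)
        masked_fraction = 1.0 - mask.mean()
        assert abs(masked_fraction - 0.25) < 0.05, masked_fraction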
def geom_noise_mask_single(L, lm, masking_ratio):
"""
Randomly create a boolean mask of length `L`, consisting of subsequences of average length lm, masking with 0s a `masking_ratio`
proportion of the sequence L. The length of masking subsequences and intervals follow a geometric distribution.
Args:
L: length of mask and sequence to be masked
lm: average length of masking subsequences (streaks of 0s)
masking_ratio: proportion of L to be masked
Returns:
(L,) boolean numpy array intended to mask ('drop') with 0s a sequence of length L
"""
keep_mask = np.ones(L, dtype=bool)
# probability of each masking sequence stopping. parameter of geometric distribution.
p_m = 1 / lm
# probability of each unmasked sequence stopping. parameter of geometric distribution.
p_u = p_m * masking_ratio / (1 - masking_ratio)
p = [p_m, p_u]
# Start in state 0 with masking_ratio probability
# state 0 means masking, 1 means not masking
state = int(np.random.rand() > masking_ratio)
for i in range(L):
        # the keep-mask value coincides with the state encoding (0 = masked, 1 = kept)
keep_mask[i] = state
if np.random.rand() < p[state]:
state = 1 - state
return keep_mask
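# Editorial note (not part of the original module): why p_u above yields the requested
# masking ratio r. Masked streaks have mean length 1/p_m = lm and unmasked streaks
# mean length 1/p_u = lm * (1 - r) / r, so the stationary fraction of masked steps is
#     lm / (lm + lm * (1 - r) / r) = r,
# i.e. exactly `masking_ratio`, independent of lm.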
def padding_mask(lengths, max_len=None):
"""
Used to mask padded positions: creates a (batch_size, max_len) boolean mask from a tensor of sequence lengths,
where 1 means keep element at this position (time step)
"""
batch_size = lengths.numel()
    # trick works because of overloading of 'or' operator for non-boolean types
    max_len = max_len or int(lengths.max())
return (torch.arange(0, max_len, device=lengths.device)
.type_as(lengths)
.repeat(batch_size, 1)
.lt(lengths.unsqueeze(1)))
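# Editorial sketch (not part of the original module): `padding_mask` turns per-sample
# lengths into a boolean keep-mask over the padded positions.
def _example_padding_mask():
    lengths = torch.tensor([3, 1], dtype=torch.int16)
    mask = padding_mask(lengths, max_len=4)
    expected = torch.tensor([[True, True, True, False],
                             [True, False, False, False]])
    assert (mask == expected).all()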
================================================
FILE: ts_classification_methods/tst_cls/src/datasets/datasplit.py
================================================
import numpy as np
from sklearn import model_selection
def split_dataset(data_indices, validation_method, n_splits, validation_ratio, test_set_ratio=0,
test_indices=None,
random_seed=42, labels=None, ith=None):
"""
Splits dataset (i.e. the global datasets indices) into a test set and a training/validation set.
The training/validation set is used to produce `n_splits` different configurations/splits of indices.
Returns:
test_indices: numpy array containing the global datasets indices corresponding to the test set
(empty if test_set_ratio is 0 or None)
train_indices: iterable of `n_splits` (num. of folds) numpy arrays,
each array containing the global datasets indices corresponding to a fold's training set
val_indices: iterable of `n_splits` (num. of folds) numpy arrays,
each array containing the global datasets indices corresponding to a fold's validation set
"""
# Set aside test set, if explicitly defined
'''
if test_indices is not None:
data_indices = np.array([ind for ind in data_indices if ind not in set(test_indices)]) # to keep initial order
'''
datasplitter = DataSplitter.factory(
validation_method, data_indices, labels, ith) # DataSplitter object
# Set aside a random partition of all data as a test set
'''
if test_indices is None:
#if test_set_ratio: # only if test set not explicitly defined
datasplitter.split_testset(test_ratio=test_set_ratio, random_state=random_seed)
test_indices = datasplitter.test_indices
else:
test_indices = []
'''
# Split train / validation sets
# TODO directly split the test set, ignore the val set
datasplitter.split_testset(
test_ratio=test_set_ratio, random_state=random_seed)
datasplitter.split_validation(
n_splits, validation_ratio, random_state=random_seed)
return datasplitter.train_indices, datasplitter.val_indices, datasplitter.test_indices
class DataSplitter(object):
"""Factory class, constructing subclasses based on feature type"""
def __init__(self, data_indices, data_labels=None, ith=None):
"""data_indices = train_val_indices | test_indices"""
self.data_indices = data_indices # global datasets indices
self.data_labels = data_labels # global raw datasets labels
# global non-test indices (training and validation)
self.train_val_indices = np.copy(self.data_indices)
self.test_indices = [] # global test indices
self.ith = ith
if data_labels is not None:
self.train_val_labels = np.copy(
self.data_labels) # global non-test labels (includes training and validation)
self.test_labels = [] # global test labels # TODO: maybe not needed
@staticmethod
def factory(split_type, *args, **kwargs):
if split_type == "StratifiedShuffleSplit":
return StratifiedShuffleSplitter(*args, **kwargs)
if split_type == "ShuffleSplit":
return ShuffleSplitter(*args, **kwargs)
if split_type == "StratifiedKFold":
return StratifiedKFoldSplitter(*args, **kwargs)
else:
raise ValueError(
"DataSplitter for '{}' does not exist".format(split_type))
def split_testset(self, test_ratio, random_state=1337):
"""
Input:
test_ratio: ratio of test set with respect to the entire dataset. Should result in an absolute number of
samples which is greater or equal to the number of classes
Returns:
test_indices: numpy array containing the global datasets indices corresponding to the test set
test_labels: numpy array containing the labels corresponding to the test set
"""
raise NotImplementedError("Please override function in child class")
def split_validation(self):
"""
Returns:
train_indices: iterable of n_splits (num. of folds) numpy arrays,
each array containing the global datasets indices corresponding to a fold's training set
val_indices: iterable of n_splits (num. of folds) numpy arrays,
each array containing the global datasets indices corresponding to a fold's validation set
"""
raise NotImplementedError("Please override function in child class")
# TODO: add a k-fold option. Add an index to split_dataset so that every run is
# guaranteed to fetch the index-th fold of the data:
# 1. test whether setting random_state makes the shuffle result identical across runs;
# 2. add an index argument to the argparser, i.e. train each dataset with 5 scripts;
# 3. finish StratifiedKFold and add the index to split_dataset;
# 4. ith starts from 0
'''
remove the ith
'''
class StratifiedKFoldSplitter(DataSplitter):
def split_testset(self, test_ratio, random_state=42):
splitter = model_selection.StratifiedKFold(
n_splits=5, shuffle=True, random_state=random_state)
train_val_indices = None
test_indices = None
for i, (raw, test) in enumerate(splitter.split(X=np.zeros(len(self.data_indices)), y=self.data_labels)):
train_val_indices = np.array(raw, dtype=np.int64)
test_indices = np.array(test, dtype=np.int64)
if self.ith == i:
break
self.data_labels = np.array(self.data_labels)
self.data_indices = np.array(self.data_indices)
train_val_indices = np.array(train_val_indices)
test_indices = np.array(test_indices)
self.train_val_indices, self.train_val_labels = self.data_indices[
train_val_indices], self.data_labels[train_val_indices]
self.test_indices, self.test_labels = self.data_indices[
test_indices], self.data_labels[test_indices]
return
def split_validation(self, n_splits, validation_ratio, random_state=42):
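        # Note: `n_splits` and `validation_ratio` are ignored here; a fixed 4-fold
        # StratifiedKFold of the remaining train/val data is used, so the validation
        # set is one quarter of the non-test samples.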
splitter = model_selection.StratifiedKFold(
n_splits=4, shuffle=True, random_state=random_state)
'''
train_indices, val_indices = next(splitter.split(X=np.zeros(len(self.train_val_labels)), y=self.train_val_labels))
train_indices = np.array(train_indices, dtype=np.int64)
val_indices = np.array(val_indices, dtype=np.int64)
self.train_indices = self.train_val_indices[train_indices]
self.val_indices = self.train_val_indices[val_indices]
'''
train_indices, val_indices = zip(
*splitter.split(X=np.zeros(len(self.train_val_labels)), y=self.train_val_labels))
# return global datasets indices per fold
#print(train_indices[0].shape, train_indices[1].shape, train_indices[2].shape, train_indices[3].shape)
self.train_indices = self.train_val_indices[train_indices[0]]
self.val_indices = self.train_val_indices[val_indices[0]]
return
class StratifiedShuffleSplitter(DataSplitter):
"""
Returns randomized shuffled folds, which preserve the class proportions of samples in each fold. Differs from k-fold
in that not all samples are evaluated, and samples may be shared across validation sets,
which becomes more probable proportionally to validation_ratio/n_splits.
"""
def split_testset(self, test_ratio, random_state=1337):
"""
Input:
test_ratio: ratio of test set with respect to the entire dataset. Should result in an absolute number of
samples which is greater or equal to the number of classes
Returns:
test_indices: numpy array containing the global datasets indices corresponding to the test set
test_labels: numpy array containing the labels corresponding to the test set
"""
splitter = model_selection.StratifiedShuffleSplit(
n_splits=1, test_size=test_ratio, random_state=random_state)
# get local indices, i.e. indices in [0, len(data_labels))
train_val_indices, test_indices = next(splitter.split(
X=np.zeros(len(self.data_indices)), y=self.data_labels))
# return global datasets indices and labels
self.train_val_indices, self.train_val_labels = self.data_indices[
train_val_indices], self.data_labels[train_val_indices]
self.test_indices, self.test_labels = self.data_indices[
test_indices], self.data_labels[test_indices]
return
def split_validation(self, n_splits, validation_ratio, random_state=1337):
"""
Input:
n_splits: number of different, randomized and independent from one-another folds
validation_ratio: ratio of validation set with respect to the entire dataset. Should result in an absolute number of
samples which is greater or equal to the number of classes
Returns:
train_indices: iterable of n_splits (num. of folds) numpy arrays,
each array containing the global datasets indices corresponding to a fold's training set
val_indices: iterable of n_splits (num. of folds) numpy arrays,
each array containing the global datasets indices corresponding to a fold's validation set
"""
splitter = model_selection.StratifiedShuffleSplit(n_splits=n_splits, test_size=validation_ratio,
random_state=random_state)
# get local indices, i.e. indices in [0, len(train_val_labels)), per fold
train_indices, val_indices = zip(
*splitter.split(X=np.zeros(len(self.train_val_labels)), y=self.train_val_labels))
# return global datasets indices per fold
self.train_indices = [self.train_val_indices[fold_indices]
for fold_indices in train_indices]
self.val_indices = [self.train_val_indices[fold_indices]
for fold_indices in val_indices]
return
class ShuffleSplitter(DataSplitter):
"""
Returns randomized shuffled folds without requiring or taking into account the sample labels. Differs from k-fold
in that not all samples are evaluated, and samples may be shared across validation sets,
which becomes more probable proportionally to validation_ratio/n_splits.
"""
def split_testset(self, test_ratio, random_state=1337):
"""
Input:
test_ratio: ratio of test set with respect to the entire dataset. Should result in an absolute number of
samples which is greater or equal to the number of classes
Returns:
test_indices: numpy array containing the global datasets indices corresponding to the test set
test_labels: numpy array containing the labels corresponding to the test set
"""
splitter = model_selection.ShuffleSplit(
n_splits=1, test_size=test_ratio, random_state=random_state)
# get local indices, i.e. indices in [0, len(data_indices))
train_val_indices, test_indices = next(
splitter.split(X=np.zeros(len(self.data_indices))))
# return global datasets indices and labels
self.train_val_indices = self.data_indices[train_val_indices]
self.test_indices = self.data_indices[test_indices]
if self.data_labels is not None:
self.train_val_labels = self.data_labels[train_val_indices]
self.test_labels = self.data_labels[test_indices]
return
def split_validation(self, n_splits, validation_ratio, random_state=1337):
"""
Input:
n_splits: number of different, randomized and independent from one-another folds
validation_ratio: ratio of validation set with respect to the entire dataset. Should result in an absolute number of
samples which is greater or equal to the number of classes
Returns:
train_indices: iterable of n_splits (num. of folds) numpy arrays,
each array containing the global datasets indices corresponding to a fold's training set
val_indices: iterable of n_splits (num. of folds) numpy arrays,
each array containing the global datasets indices corresponding to a fold's validation set
"""
splitter = model_selection.ShuffleSplit(n_splits=n_splits, test_size=validation_ratio,
random_state=random_state)
# get local indices, i.e. indices in [0, len(train_val_labels)), per fold
train_indices, val_indices = zip(
*splitter.split(X=np.zeros(len(self.train_val_indices))))
# return global datasets indices per fold
self.train_indices = [self.train_val_indices[fold_indices]
for fold_indices in train_indices]
self.val_indices = [self.train_val_indices[fold_indices]
for fold_indices in val_indices]
return
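# Editorial sketch (not part of the original module): typical use of `split_dataset`
# for one fold of a 5-fold protocol. `labels` is assumed to be a flat array aligned
# with data_indices, as in main.py.
#
#   train_idx, val_idx, test_idx = split_dataset(
#       data_indices=np.arange(len(labels)),
#       validation_method='StratifiedKFold',
#       n_splits=1, validation_ratio=0.25, test_set_ratio=0.2,
#       random_seed=42, labels=labels, ith=0)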
================================================
FILE: ts_classification_methods/tst_cls/src/datasets/utils.py
================================================
"""
Code to load Time Series Regression datasets. From:
https://github.com/ChangWeiTan/TSRegression/blob/master/utils
"""
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from tqdm import tqdm
regression_datasets = ["AustraliaRainfall",
"HouseholdPowerConsumption1",
"HouseholdPowerConsumption2",
"BeijingPM25Quality",
"BeijingPM10Quality",
"Covid3Month",
"LiveFuelMoistureContent",
"FloodModeling1",
"FloodModeling2",
"FloodModeling3",
"AppliancesEnergy",
"BenzeneConcentration",
"NewsHeadlineSentiment",
"NewsTitleSentiment",
"BIDMC32RR",
"BIDMC32HR",
"BIDMC32SpO2",
"IEEEPPG",
"PPGDalia"]
def uniform_scaling(data, max_len):
"""
This is a function to scale the time series uniformly
:param data:
:param max_len:
:return:
"""
seq_len = len(data)
scaled_data = [data[int(j * seq_len / max_len)] for j in range(max_len)]
return scaled_data
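# Editorial sketch (not part of the original module): `uniform_scaling` stretches a
# series to `max_len` by repeating the nearest source points.
def _example_uniform_scaling():
    assert uniform_scaling([10, 20, 30], max_len=6) == [10, 10, 20, 20, 30, 30]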
# The following code is adapted from the python package sktime to read .ts file.
class TsFileParseException(Exception):
"""
Should be raised when parsing a .ts file and the format is incorrect.
"""
pass
def load_from_tsfile_to_dataframe(full_file_path_and_name, return_separate_X_and_y=True,
replace_missing_vals_with='NaN'):
"""Loads data from a .ts file into a Pandas DataFrame.
Parameters
----------
full_file_path_and_name: str
The full pathname of the .ts file to read.
    return_separate_X_and_y: bool
        true if X and Y values should be returned as separate Data Frames (X) and a numpy array (y), false otherwise.
        This is only relevant for data that has associated class or target labels.
replace_missing_vals_with: str
The value that missing values in the text file should be replaced with prior to parsing.
Returns
-------
DataFrame, ndarray
If return_separate_X_and_y then a tuple containing a DataFrame and a numpy array containing the relevant time-series and corresponding class values.
DataFrame
If not return_separate_X_and_y then a single DataFrame containing all time-series and (if relevant) a column "class_vals" the associated class values.
"""
# Initialize flags and variables used when parsing the file
metadata_started = False
data_started = False
has_problem_name_tag = False
has_timestamps_tag = False
has_univariate_tag = False
has_class_labels_tag = False
has_target_labels_tag = False
has_data_tag = False
previous_timestamp_was_float = None
previous_timestamp_was_int = None
previous_timestamp_was_timestamp = None
num_dimensions = None
is_first_case = True
instance_list = []
class_val_list = []
line_num = 0
# Parse the file
# print(full_file_path_and_name)
with open(full_file_path_and_name, 'r', encoding='utf-8') as file:
for line in tqdm(file):
# print(".", end='')
# Strip white space from start/end of line and change to lowercase for use below
line = line.strip().lower()
# Empty lines are valid at any point in a file
if line:
# Check if this line contains metadata
# Please note that even though metadata is stored in this function it is not currently published externally
if line.startswith("@problemname"):
# Check that the data has not started
if data_started:
raise TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise TsFileParseException("problemname tag requires an associated value")
problem_name = line[len("@problemname") + 1:]
has_problem_name_tag = True
metadata_started = True
elif line.startswith("@timestamps"):
# Check that the data has not started
if data_started:
raise TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise TsFileParseException("timestamps tag requires an associated Boolean value")
elif tokens[1] == "true":
timestamps = True
elif tokens[1] == "false":
timestamps = False
else:
raise TsFileParseException("invalid timestamps value")
has_timestamps_tag = True
metadata_started = True
elif line.startswith("@univariate"):
# Check that the data has not started
if data_started:
raise TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise TsFileParseException("univariate tag requires an associated Boolean value")
elif tokens[1] == "true":
univariate = True
elif tokens[1] == "false":
univariate = False
else:
raise TsFileParseException("invalid univariate value")
has_univariate_tag = True
metadata_started = True
elif line.startswith("@classlabel"):
# Check that the data has not started
if data_started:
raise TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise TsFileParseException("classlabel tag requires an associated Boolean value")
if tokens[1] == "true":
class_labels = True
elif tokens[1] == "false":
class_labels = False
else:
raise TsFileParseException("invalid classLabel value")
# Check if we have any associated class values
if token_len == 2 and class_labels:
raise TsFileParseException("if the classlabel tag is true then class values must be supplied")
has_class_labels_tag = True
class_label_list = [token.strip() for token in tokens[2:]]
metadata_started = True
elif line.startswith("@targetlabel"):
# Check that the data has not started
if data_started:
raise TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise TsFileParseException("targetlabel tag requires an associated Boolean value")
if tokens[1] == "true":
target_labels = True
elif tokens[1] == "false":
target_labels = False
else:
raise TsFileParseException("invalid targetLabel value")
has_target_labels_tag = True
class_val_list = []
metadata_started = True
# Check if this line contains the start of data
elif line.startswith("@data"):
if line != "@data":
raise TsFileParseException("data tag should not have an associated value")
if data_started and not metadata_started:
raise TsFileParseException("metadata must come before data")
else:
has_data_tag = True
data_started = True
            # If the '@data' tag has been found then metadata has been parsed and data can be loaded
elif data_started:
# Check that a full set of metadata has been provided
incomplete_regression_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_target_labels_tag or not has_data_tag
incomplete_classification_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_class_labels_tag or not has_data_tag
if incomplete_regression_meta_data and incomplete_classification_meta_data:
raise TsFileParseException("a full set of metadata has not been provided before the data")
# Replace any missing values with the value specified
line = line.replace("?", replace_missing_vals_with)
                # Check if we are dealing with data that has timestamps
if timestamps:
# We're dealing with timestamps so cannot just split line on ':' as timestamps may contain one
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
this_line_num_dimensions = 0
line_len = len(line)
char_num = 0
while char_num < line_len:
# Move through any spaces
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
# See if there is any more data to read in or if we should validate that read thus far
if char_num < line_len:
# See if we have an empty dimension (i.e. no values)
if line[char_num] == ":":
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
has_another_value = False
has_another_dimension = True
timestamps_for_dimension = []
values_for_dimension = []
char_num += 1
else:
# Check if we have reached a class label
if line[char_num] != "(" and target_labels:
class_val = line[char_num:].strip()
# if class_val not in class_val_list:
# raise TsFileParseException(
# "the class value '" + class_val + "' on line " + str(
# line_num + 1) + " is not valid")
class_val_list.append(float(class_val))
char_num = line_len
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
else:
# Read in the data contained within the next tuple
if line[char_num] != "(" and not target_labels:
raise TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not start with a '('")
char_num += 1
tuple_data = ""
while char_num < line_len and line[char_num] != ")":
tuple_data += line[char_num]
char_num += 1
if char_num >= line_len or line[char_num] != ")":
raise TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not end with a ')'")
# Read in any spaces immediately after the current tuple
char_num += 1
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
# Check if there is another value or dimension to process after this tuple
if char_num >= line_len:
has_another_value = False
has_another_dimension = False
elif line[char_num] == ",":
has_another_value = True
has_another_dimension = False
elif line[char_num] == ":":
has_another_value = False
has_another_dimension = True
char_num += 1
# Get the numeric value for the tuple by reading from the end of the tuple data backwards to the last comma
last_comma_index = tuple_data.rfind(',')
if last_comma_index == -1:
raise TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has no comma inside of it")
try:
value = tuple_data[last_comma_index + 1:]
value = float(value)
except ValueError:
raise TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that does not have a valid numeric value")
# Check the type of timestamp that we have
timestamp = tuple_data[0: last_comma_index]
try:
timestamp = int(timestamp)
timestamp_is_int = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_int = False
if not timestamp_is_int:
try:
timestamp = float(timestamp)
timestamp_is_float = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_float = False
if not timestamp_is_int and not timestamp_is_float:
try:
timestamp = timestamp.strip()
timestamp_is_timestamp = True
except ValueError:
timestamp_is_timestamp = False
# Make sure that the timestamps in the file (not just this dimension or case) are consistent
if not timestamp_is_timestamp and not timestamp_is_int and not timestamp_is_float:
raise TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has an invalid timestamp '" + timestamp + "'")
if previous_timestamp_was_float is not None and previous_timestamp_was_float and not timestamp_is_float:
raise TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_int is not None and previous_timestamp_was_int and not timestamp_is_int:
raise TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_timestamp is not None and previous_timestamp_was_timestamp and not timestamp_is_timestamp:
raise TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
# Store the values
timestamps_for_dimension += [timestamp]
values_for_dimension += [value]
# If this was our first tuple then we store the type of timestamp we had
if previous_timestamp_was_timestamp is None and timestamp_is_timestamp:
previous_timestamp_was_timestamp = True
previous_timestamp_was_int = False
previous_timestamp_was_float = False
if previous_timestamp_was_int is None and timestamp_is_int:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = True
previous_timestamp_was_float = False
if previous_timestamp_was_float is None and timestamp_is_float:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = False
previous_timestamp_was_float = True
# See if we should add the data for this dimension
if not has_another_value:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
if timestamp_is_timestamp:
timestamps_for_dimension = pd.DatetimeIndex(timestamps_for_dimension)
instance_list[this_line_num_dimensions].append(
pd.Series(index=timestamps_for_dimension, data=values_for_dimension))
this_line_num_dimensions += 1
timestamps_for_dimension = []
values_for_dimension = []
elif has_another_value:
raise TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32))
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and not has_another_dimension:
if num_dimensions is None:
num_dimensions = this_line_num_dimensions
if num_dimensions != this_line_num_dimensions:
raise TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
                    # Check that we are not expecting some more data, and if not, store what was processed above
if has_another_value:
raise TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and num_dimensions != this_line_num_dimensions:
raise TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
# Check if we should have class values, and if so that they are contained in those listed in the metadata
if target_labels and len(class_val_list) == 0:
raise TsFileParseException("the cases have no associated class values")
else:
dimensions = line.split(":")
# If first row then note the number of dimensions (that must be the same for all cases)
if is_first_case:
num_dimensions = len(dimensions)
if target_labels:
num_dimensions -= 1
for dim in range(0, num_dimensions):
instance_list.append([])
is_first_case = False
                    # See how many dimensions the case represented in this line has
this_line_num_dimensions = len(dimensions)
if target_labels:
this_line_num_dimensions -= 1
# All dimensions should be included for all series, even if they are empty
if this_line_num_dimensions != num_dimensions:
raise TsFileParseException("inconsistent number of dimensions. Expecting " + str(
num_dimensions) + " but have read " + str(this_line_num_dimensions))
# Process the data for each dimension
for dim in range(0, num_dimensions):
dimension = dimensions[dim].strip()
if dimension:
data_series = dimension.split(",")
data_series = [float(i) for i in data_series]
instance_list[dim].append(pd.Series(data_series))
else:
instance_list[dim].append(pd.Series())
if target_labels:
class_val_list.append(float(dimensions[num_dimensions].strip()))
line_num += 1
# Check that the file was not empty
if line_num:
# Check that the file contained both metadata and data
complete_regression_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_target_labels_tag and has_data_tag
complete_classification_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_class_labels_tag and has_data_tag
if metadata_started and not complete_regression_meta_data and not complete_classification_meta_data:
raise TsFileParseException("metadata incomplete")
elif metadata_started and not data_started:
raise TsFileParseException("file contained metadata but no data")
elif metadata_started and data_started and len(instance_list) == 0:
raise TsFileParseException("file contained metadata but no data")
# Create a DataFrame from the data parsed above
data = pd.DataFrame(dtype=np.float32)
for dim in range(0, num_dimensions):
data['dim_' + str(dim)] = instance_list[dim]
# Check if we should return any associated class labels separately
if target_labels:
if return_separate_X_and_y:
return data, np.asarray(class_val_list)
else:
data['class_vals'] = pd.Series(class_val_list)
return data
else:
return data
else:
raise TsFileParseException("empty file")
def process_data(X, min_len, normalise=None):
"""
This is a function to process the data, i.e. convert dataframe to numpy array
:param X:
:param min_len:
:param normalise:
:return:
"""
tmp = []
for i in tqdm(range(len(X))):
_x = X.iloc[i, :].copy(deep=True)
# 1. find the maximum length of each dimension
all_len = [len(y) for y in _x]
max_len = max(all_len)
# 2. adjust the length of each dimension
_y = []
for y in _x:
# 2.1 fill missing values
if y.isnull().any():
y = y.interpolate(method='linear', limit_direction='both')
# 2.2. if length of each dimension is different, uniformly scale the shorter ones to the max length
if len(y) < max_len:
y = uniform_scaling(y, max_len)
_y.append(y)
_y = np.array(np.transpose(_y))
        # 3. adjust the length of the series, chop off the longer series
_y = _y[:min_len, :]
# 4. normalise the series
if normalise == "standard":
scaler = StandardScaler().fit(_y)
_y = scaler.transform(_y)
if normalise == "minmax":
scaler = MinMaxScaler().fit(_y)
_y = scaler.transform(_y)
tmp.append(_y)
X = np.array(tmp)
return X
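# Editorial sketch (not part of the original module): end-to-end loading of a
# regression .ts file (the file path is hypothetical).
#
#   X_df, y = load_from_tsfile_to_dataframe('data/AppliancesEnergy_TRAIN.ts')
#   min_len = min(len(s) for s in X_df['dim_0'])
#   X = process_data(X_df, min_len=min_len, normalise='standard')
#   # X: (n_samples, min_len, n_dims) float array; y: (n_samples,) targets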
================================================
FILE: ts_classification_methods/tst_cls/src/main.py
================================================
"""
Written by George Zerveas
Modified by Ziyang Huang
If you use any part of the code in this repository, please consider citing the following paper:
George Zerveas et al. A Transformer-based Framework for Multivariate Time Series Representation Learning, in
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '21), August 14--18, 2021
"""
from models.loss import NoFussCrossEntropyLoss, MaskedMSELoss
from dataprepare import *
from optimizers import get_optimizer
from models.loss import get_loss_module
from models.ts_transformer import model_factory
from datasets.datasplit import split_dataset
from datasets.data import data_factory, Normalizer
from utils import utils
from running import setup, pipeline_factory, validate, check_progress, NEG_METRICS
from options import Options
import numpy as np
from torch.utils.tensorboard import SummaryWriter
from torch import nn
from torch.utils.data import DataLoader
import torch
from tqdm import tqdm
import pandas as pd
import json
import pickle
import time
import sys
import os
from copy import deepcopy
import logging
logging.basicConfig(
format='%(asctime)s | %(levelname)s : %(message)s', level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info("Loading packages ...")
# 3rd party packages
# Project modules
def main(config):
total_epoch_time = 0
total_eval_time = 0
total_start_time = time.time()
# Add file logging besides stdout
file_handler = logging.FileHandler(
os.path.join(config['output_dir'], 'output.log'))
logger.addHandler(file_handler)
logger.info('Running:\n{}\n'.format(
' '.join(sys.argv))) # command used to run
if config['seed'] is not None:
torch.manual_seed(config['seed'])
if config['multi_gpu']:
device_ids = [0, 1]
device = torch.device('cuda:{}'.format(config['gpu']) if (
torch.cuda.is_available() and config['gpu'] != '-1') else 'cpu')
logger.info("Using device: {}".format(device))
    if device.type == 'cuda':
logger.info("Device index: {}".format(torch.cuda.current_device()))
# Build data
logger.info("Loading and preprocessing data ...")
data_class = data_factory[config['data_class']]
my_data = data_class(config['data_dir'], pattern=config['pattern'],
n_proc=config['n_proc'], limit_size=config['limit_size'], config=config)
feat_dim = my_data.feature_df.shape[1] # dimensionality of data features
validation_method = 'StratifiedKFold'
labels = my_data.labels_df.values.flatten()
# Split dataset
test_data = my_data
# will be converted to empty list in `split_dataset`, if also test_set_ratio == 0
test_indices = None
val_data = my_data
val_indices = []
# Note: currently a validation set must exist, either with `val_pattern` or `val_ratio`
# Using a `val_pattern` means that `val_ratio` == 0 and `test_ratio` == 0
# 5 fold
accus = []
times = []
end_epochs = []
for i in range(5):
fold_start_time = time.time()
train_indices, val_indices, test_indices = split_dataset(data_indices=my_data.all_IDs,
validation_method=validation_method,
n_splits=1,
validation_ratio=config['val_ratio'],
# used only if test_indices not explicitly specified
test_set_ratio=config['test_ratio'],
test_indices=test_indices,
random_seed=42,
labels=labels, ith=i)
logger.info('{} fold start training!'.format(i))
logger.info("{} samples may be used for training".format(
len(train_indices)))
logger.info(
"{} samples will be used for validation".format(len(val_indices)))
logger.info("{} samples will be used for testing".format(
len(test_indices)))
# Create model
logger.info("Creating model ...")
if config['task'] == 'pretrain_and_finetune':
model, classifier = model_factory(config, my_data, labels)
else:
model = model_factory(config, my_data)
if config['global_reg']:
weight_decay = config['l2_reg']
output_reg = None
else:
weight_decay = 0
output_reg = config['l2_reg']
optim_class = get_optimizer(config['optimizer'])
optimizer = optim_class(
model.parameters(), lr=config['lr'], weight_decay=weight_decay)
start_epoch = 0
lr_step = 0 # current step index of `lr_step`
        lr = config['lr']  # current learning rate
# Load model and optimizer state
if config['multi_gpu']:
model = nn.DataParallel(model, device_ids)
optimizer = nn.DataParallel(optimizer, device_ids)
if config['task'] == 'pretrain_and_finetune':
classifier = nn.DataParallel(classifier, device_ids)
model.to(device)
if config['task'] == 'pretrain_and_finetune':
classifier.to(device)
elif config['task'] == 'classification':
if config['load_root'] is not None:
model.load_state_dict(torch.load(os.path.join(
config['load_root'], config['source_dataset'], 'encoder_weights.pt'), device))
classifier = model
loss_module = MaskedMSELoss(reduction='none')
classification_module = NoFussCrossEntropyLoss(reduction='none')
if config['task'] == 'classification':
loss_module = classification_module
'''
if config['multi_gpu']:
loss_module = nn.DataParallel(loss_module, device_ids)
'''
# Initialize data generators
if config['task'] == 'pretrain_and_finetune':
dataset_class, collate_fn, runner_class, cls_data_class, cls_collate_fn, cls_runner_cls = pipeline_factory(
config, device)
else:
dataset_class, collate_fn, runner_class = pipeline_factory(
config, device)
cls_data_class, cls_collate_fn, cls_runner_cls = dataset_class, collate_fn, runner_class
train_df, val_df, test_df = fill_nan_and_normalize(
my_data.feature_df.loc[train_indices], val_data.feature_df.loc[val_indices], test_data.feature_df.loc[test_indices], train_indices, val_indices, test_indices)
test_dataset = dataset_class(
test_data, test_indices, feature_df=test_df)
test_loader = DataLoader(dataset=test_dataset,
batch_size=config['batch_size'],
shuffle=False,
num_workers=8,
pin_memory=True,
collate_fn=lambda x: cls_collate_fn(x, max_len=model.max_len))
val_dataset = dataset_class(val_data, val_indices, feature_df=val_df)
val_loader = DataLoader(dataset=val_dataset,
batch_size=config['batch_size'],
shuffle=False,
num_workers=8,
pin_memory=True,
collate_fn=lambda x: collate_fn(x, max_len=model.max_len))
# config['num_workers'],pin_memory=True
train_dataset = dataset_class(
my_data, train_indices, feature_df=train_df)
train_loader = DataLoader(dataset=train_dataset,
batch_size=config['batch_size'],
shuffle=True,
num_workers=8,
pin_memory=True,
collate_fn=lambda x: collate_fn(x, max_len=model.max_len))
trainer = runner_class(model, train_loader, device, loss_module, optimizer, l2_reg=output_reg,
print_interval=config['print_interval'], console=config['console'])
val_evaluator = runner_class(model, val_loader, device, loss_module,
print_interval=config['print_interval'], console=config['console'])
test_evaluator = runner_class(model, test_loader, device, loss_module,
print_interval=config['print_interval'], console=config['console'])
tensorboard_writer = SummaryWriter(config['tensorboard_dir'])
# initialize with +inf or -inf depending on key metric
best_value = 1e16 if config['key_metric'] in NEG_METRICS else -1e16
best_test = 1e16 if config['key_metric'] in NEG_METRICS else -1e16
best_metrics = {}
best_test_metrics = {}
logger.info('Starting training...')
stop_count = 0
increase_count = 0
last_loss = 1e16
val_loss = 1e16
best_epoch = 0
for epoch in range(start_epoch + 1, config["epochs"] + 1):
if stop_count == 50 or increase_count == 50:
                print('model converged at epoch {}, early stopping'.format(epoch))
break
epoch_start_time = time.time()
# dictionary of aggregate epoch metrics
aggr_metrics_train = trainer.train_epoch(epoch)
epoch_runtime = time.time() - epoch_start_time
if epoch % 100 == 0:
print("epoch : {}".format(epoch))
if config['task'] == 'pretrain_and_finetune':
aggr_metrics_val, best_metrics, best_value, condition = validate(val_evaluator, tensorboard_writer, config,
best_metrics, best_value, epoch)
if condition or epoch == 1:
best_epoch = epoch
best_state_dict = deepcopy(model.state_dict())
elif config['task'] == 'classification':
aggr_metrics_val, best_metrics, best_value, condition = validate(val_evaluator, tensorboard_writer, config,
best_metrics, best_value, epoch)
if condition or epoch == 1:
best_epoch = epoch
best_state_dict = deepcopy(model.state_dict())
_, best_test_metrics, best_test, _ = validate(test_evaluator, tensorboard_writer, config,
best_test_metrics, best_test, epoch)
val_loss = aggr_metrics_val['loss']
if abs(last_loss - val_loss) <= 1e-4:
stop_count += 1
else:
stop_count = 0
if val_loss > last_loss:
increase_count += 1
else:
increase_count = 0
last_loss = val_loss
# save encoder weights
if config['task'] == 'classification_transfer':
save_path = os.path.join(
config['weights_save_path'], config['dataset'])
if not os.path.exists(save_path):
os.makedirs(save_path)
            # build the state dict first (it was previously referenced before assignment),
            # then drop the task-specific head so only the encoder weights are saved
            state_dict = deepcopy(model.state_dict())
            for key in list(state_dict.keys()):
                if key.startswith('output_layer'):
                    state_dict.pop(key)
            torch.save(state_dict, os.path.join(
                save_path, 'encoder_weights.pt'))
if config['task'] == 'pretrain_and_finetune':
classifier_optimizer = optim_class(
classifier.parameters(), lr=config['lr'], weight_decay=weight_decay)
if config['multi_gpu']:
classifier_optimizer = nn.DataParallel(
classifier_optimizer, device_ids)
finetune_train_dataset = cls_data_class(
my_data, train_indices, feature_df=train_df)
finetune_train_loader = DataLoader(dataset=finetune_train_dataset,
batch_size=config['batch_size'],
shuffle=True,
num_workers=8,
collate_fn=lambda x: cls_collate_fn(x, max_len=classifier.max_len))
test_dataset = cls_data_class(
test_data, test_indices, feature_df=test_df)
test_loader = DataLoader(dataset=test_dataset,
batch_size=config['batch_size'],
shuffle=False,
num_workers=8,
pin_memory=True,
collate_fn=lambda x: cls_collate_fn(x, max_len=classifier.max_len), drop_last=True)
val_dataset = cls_data_class(
val_data, val_indices, feature_df=val_df)
val_loader = DataLoader(dataset=val_dataset,
batch_size=config['batch_size'],
shuffle=False,
num_workers=8,
pin_memory=True,
collate_fn=lambda x: cls_collate_fn(x, max_len=classifier.max_len), drop_last=True)
val_evaluator = cls_runner_cls(classifier, val_loader, device, classification_module,
print_interval=config['print_interval'], console=config['console'])
test_evaluator = cls_runner_cls(classifier, test_loader, device, classification_module,
print_interval=config['print_interval'], console=config['console'])
classifier_trainer = cls_runner_cls(classifier, finetune_train_loader, device, classification_module, classifier_optimizer, l2_reg=output_reg,
print_interval=config['print_interval'], console=config['console'])
state_dict = deepcopy(best_state_dict)
        for key in list(state_dict.keys()):
            if key.startswith('output_layer'):
                state_dict.pop(key)
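        # NOTE: loading the pretrained encoder weights into the classifier appears to be
        # disabled below; without DataParallel this would read
        # `classifier.load_state_dict(state_dict, strict=False)`.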
#classifier.module.load_state_dict(state_dict, strict=False)
for epoch in range(start_epoch + 1, 101):
epoch_start_time = time.time()
aggr_metrics_train = classifier_trainer.train_epoch(
epoch) # dictionary of aggregate epoch metrics
epoch_runtime = time.time() - epoch_start_time
aggr_metrics_val, best_metrics, best_value, condition = validate(val_evaluator, tensorboard_writer, config,
best_metrics, best_value, epoch)
if condition or epoch == 1:
_, best_test_metrics, best_test, _ = validate(test_evaluator, tensorboard_writer, config,
best_test_metrics, best_test, epoch)
logger.info('Best {} was {}. Other metrics: {}'.format(
config['key_metric'], best_value, best_metrics))
logger.info('{} fold training Done!'.format(i))
fold_end_time = time.time()
accus.append(best_test_metrics['accuracy'].cpu().numpy())
times.append(fold_end_time-fold_start_time)
end_epochs.append(best_epoch)
    # TODO: all the metrics are available now; following tsmutil, insert them all into the results table and start training
accus = np.array(accus)
acc_mean = accus.mean()
acc_std = accus.std()
time_mean = np.array(times).mean()
epoch_mean = np.array(end_epochs).mean()
if config['task'] == 'pretrain_and_finetune':
save_path = './tst_results.csv'
if os.path.exists(save_path):
result_form = pd.read_csv(save_path)
else:
result_form = pd.DataFrame(columns=['target', 'accuracy', 'std'])
result_form = result_form.append(
{'target': config['dataset'], 'accuracy': '%.4f' % acc_mean, 'std': '%.4f' % acc_std}, ignore_index=True)
result_form = result_form.iloc[:, -3:]
result_form.to_csv(save_path)
elif config['task'] == 'classification':
save_path = './non_linear_classification_tst_results.csv'
if os.path.exists(save_path):
result_form = pd.read_csv(save_path)
else:
result_form = pd.DataFrame(columns=[
'dataset_name', 'test_accuracy', 'test_std', 'train_time', 'end_val_epoch', 'seeds'])
result_form = result_form.append({'dataset_name': config['dataset'], 'test_accuracy': '%.4f' % acc_mean, 'test_std': '%.4f' % acc_std, 'train_time': '%.4f' % time_mean, 'end_val_epoch': '%.2f' % epoch_mean,
'seeds': '%d' % 42}, ignore_index=True)
result_form = result_form.iloc[:, -6:]
result_form.to_csv(save_path)
return best_value
if __name__ == '__main__':
# set seed
SEED = 42
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)
args = Options().parse() # `argsparse` object
config = setup(args) # configuration dictionary
main(config)
================================================
FILE: ts_classification_methods/tst_cls/src/models/__init__.py
================================================
================================================
FILE: ts_classification_methods/tst_cls/src/models/loss.py
================================================
import torch
import torch.nn as nn
from torch.nn import functional as F
def get_loss_module(config):
task = config['task']
if (task == "imputation") or (task == "transduction"):
return MaskedMSELoss(reduction='none') # outputs loss for each batch element
if task == "classification":
return NoFussCrossEntropyLoss(reduction='none') # outputs loss for each batch sample
if task == "regression":
return nn.MSELoss(reduction='none') # outputs loss for each batch sample
else:
raise ValueError("Loss module for task '{}' does not exist".format(task))
def l2_reg_loss(model):
"""Returns the squared L2 norm of output layer of given model"""
for name, param in model.module.named_parameters():
if name == 'output_layer.weight':
return torch.sum(torch.square(param))
class NoFussCrossEntropyLoss(nn.CrossEntropyLoss):
"""
    PyTorch's CrossEntropyLoss is fussy: 1) it needs Long (int64) targets, and 2) only 1D targets.
    This subclass satisfies both requirements by casting targets to long and squeezing them.
"""
def forward(self, inp, target):
return F.cross_entropy(inp, target.long().squeeze(), weight=self.weight,
ignore_index=self.ignore_index, reduction=self.reduction)
class MaskedMSELoss(nn.Module):
""" Masked MSE Loss
"""
def __init__(self, reduction: str = 'mean'):
super().__init__()
self.reduction = reduction
self.mse_loss = nn.MSELoss(reduction=self.reduction)
def forward(self,
y_pred: torch.Tensor, y_true: torch.Tensor, mask: torch.BoolTensor) -> torch.Tensor:
"""Compute the loss between a target value and a prediction.
Args:
y_pred: Estimated values
y_true: Target values
mask: boolean tensor with 0s at places where values should be ignored and 1s where they should be considered
Returns
-------
if reduction == 'none':
(num_active,) Loss for each active batch element as a tensor with gradient attached.
if reduction == 'mean':
scalar mean loss over batch as a tensor with gradient attached.
"""
# for this particular loss, one may also elementwise multiply y_pred and y_true with the inverted mask
masked_pred = torch.masked_select(y_pred, mask)
masked_true = torch.masked_select(y_true, mask)
return self.mse_loss(masked_pred, masked_true)
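# Editorial sketch (not part of the original module): MaskedMSELoss only scores
# positions where the boolean mask is True; here only the first element is considered,
# so the loss equals (1 - 3)^2 = 4.
def _example_masked_mse():
    y_pred = torch.tensor([1.0, 2.0])
    y_true = torch.tensor([3.0, 9.0])
    mask = torch.tensor([True, False])
    loss = MaskedMSELoss(reduction='mean')(y_pred, y_true, mask)
    assert torch.isclose(loss, torch.tensor(4.0))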
================================================
FILE: ts_classification_methods/tst_cls/src/models/ts_transformer.py
================================================
from typing import Optional, Any
import math
import numpy as np
import torch
from torch import nn, Tensor
from torch.nn import functional as F
from torch.nn.modules import MultiheadAttention, Linear, Dropout, BatchNorm1d, TransformerEncoderLayer
def model_factory(config, data, labels=None):
task = config['task']
feat_dim = data.feature_df.shape[1] # dimensionality of data features
# data windowing is used when samples don't have a predefined length or the length is too long
max_seq_len = config['data_window_len'] if config['data_window_len'] is not None else config['max_seq_len']
if max_seq_len is None:
try:
max_seq_len = data.max_seq_len
except AttributeError as x:
print("Data class does not define a maximum sequence length, so it must be defined with the script argument `max_seq_len`")
raise x
if task == "pretrain_and_finetune":
if labels is not None:
num_labels = len(np.unique(labels))
print("numlabel is {}".format(num_labels))
encoder = TSTransformerEncoder(feat_dim, max_seq_len, config['d_model'], config['num_heads'],
config['num_layers'], config['dim_feedforward'], dropout=config['dropout'],
pos_encoding=config['pos_encoding'], activation=config['activation'],
norm=config['normalization_layer'], freeze=config['freeze'])
classifier = TSTransformerEncoderClassiregressor(feat_dim, max_seq_len, config['d_model'],
config['num_heads'],
config['num_layers'], config['dim_feedforward'],
num_classes=num_labels,
dropout=config['dropout'], pos_encoding=config['pos_encoding'],
activation=config['activation'],
norm=config['normalization_layer'], freeze=config['freeze'])
return encoder, classifier
if (task == "imputation") or (task == "transduction"):
if config['model'] == 'LINEAR':
return DummyTSTransformerEncoder(feat_dim, max_seq_len, config['d_model'], config['num_heads'],
config['num_layers'], config['dim_feedforward'], dropout=config['dropout'],
pos_encoding=config['pos_encoding'], activation=config['activation'],
norm=config['normalization_layer'], freeze=config['freeze'])
elif config['model'] == 'transformer':
return TSTransformerEncoder(feat_dim, max_seq_len, config['d_model'], config['num_heads'],
config['num_layers'], config['dim_feedforward'], dropout=config['dropout'],
pos_encoding=config['pos_encoding'], activation=config['activation'],
norm=config['normalization_layer'], freeze=config['freeze'])
if (task == "classification") or (task == "regression"):
# dimensionality of labels
num_labels = len(
data.class_names) if task == "classification" else data.labels_df.shape[1]
if config['model'] == 'LINEAR':
return DummyTSTransformerEncoderClassiregressor(feat_dim, max_seq_len, config['d_model'],
config['num_heads'],
config['num_layers'], config['dim_feedforward'],
num_classes=num_labels,
dropout=config['dropout'], pos_encoding=config['pos_encoding'],
activation=config['activation'],
norm=config['normalization_layer'], freeze=config['freeze'])
elif config['model'] == 'transformer':
return TSTransformerEncoderClassiregressor(feat_dim, max_seq_len, config['d_model'],
config['num_heads'],
config['num_layers'], config['dim_feedforward'],
num_classes=num_labels,
dropout=config['dropout'], pos_encoding=config['pos_encoding'],
activation=config['activation'],
norm=config['normalization_layer'], freeze=config['freeze'], nonlinear=config['nonlinear'])
else:
raise ValueError(
"Model class for task '{}' does not exist".format(task))
def _get_activation_fn(activation):
if activation == "relu":
return F.relu
elif activation == "gelu":
return F.gelu
raise ValueError(
"activation should be relu/gelu, not {}".format(activation))
# From https://github.com/pytorch/examples/blob/master/word_language_model/model.py
class FixedPositionalEncoding(nn.Module):
r"""Inject some information about the relative or absolute position of the tokens
in the sequence. The positional encodings have the same dimension as
the embeddings, so that the two can be summed. Here, we use sine and cosine
functions of different frequencies.
    .. math::
        \text{PosEncoder}(pos, 2i) = \sin(pos / 10000^{2i/d_{model}})
        \text{PosEncoder}(pos, 2i+1) = \cos(pos / 10000^{2i/d_{model}})
    where pos is the word position and i is the embedding index.
Args:
d_model: the embed dim (required).
dropout: the dropout value (default=0.1).
max_len: the max. length of the incoming sequence (default=1024).
"""
def __init__(self, d_model, dropout=0.1, max_len=1024, scale_factor=1.0):
super(FixedPositionalEncoding, self).__init__()
self.dropout = nn.Dropout(p=dropout)
pe = torch.zeros(max_len, d_model) # positional encoding
position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(
0, d_model, 2).float() * (-math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = scale_factor * pe.unsqueeze(0).transpose(0, 1)
# this stores the variable in the state_dict (used for non-trainable variables)
self.register_buffer('pe', pe)
def forward(self, x):
r"""Inputs of forward function
Args:
x: the sequence fed to the positional encoder model (required).
Shape:
x: [sequence length, batch size, embed dim]
output: [sequence length, batch size, embed dim]
"""
x = x + self.pe[:x.size(0), :]
return self.dropout(x)
class LearnablePositionalEncoding(nn.Module):
def __init__(self, d_model, dropout=0.1, max_len=1024):
super(LearnablePositionalEncoding, self).__init__()
self.dropout = nn.Dropout(p=dropout)
# Each position gets its own embedding
# Since indices are always 0 ... max_len, we don't have to do a look-up
# requires_grad automatically set to True
self.pe = nn.Parameter(torch.empty(max_len, 1, d_model))
nn.init.uniform_(self.pe, -0.02, 0.02)
def forward(self, x):
r"""Inputs of forward function
Args:
x: the sequence fed to the positional encoder model (required).
Shape:
x: [sequence length, batch size, embed dim]
output: [sequence length, batch size, embed dim]
"""
x = x + self.pe[:x.size(0), :]
return self.dropout(x)
def get_pos_encoder(pos_encoding):
if pos_encoding == "learnable":
return LearnablePositionalEncoding
elif pos_encoding == "fixed":
return FixedPositionalEncoding
raise NotImplementedError(
"pos_encoding should be 'learnable'/'fixed', not '{}'".format(pos_encoding))
class TransformerBatchNormEncoderLayer(nn.modules.Module):
r"""This transformer encoder layer block is made up of self-attn and feedforward network.
It differs from TransformerEncoderLayer in torch/nn/modules/transformer.py in that it replaces LayerNorm
with BatchNorm.
Args:
d_model: the number of expected features in the input (required).
nhead: the number of heads in the multiheadattention models (required).
dim_feedforward: the dimension of the feedforward network model (default=2048).
dropout: the dropout value (default=0.1).
activation: the activation function of intermediate layer, relu or gelu (default=relu).
"""
def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1, activation="relu"):
super(TransformerBatchNormEncoderLayer, self).__init__()
self.self_attn = MultiheadAttention(d_model, nhead, dropout=dropout)
# Implementation of Feedforward model
self.linear1 = Linear(d_model, dim_feedforward)
self.dropout = Dropout(dropout)
self.linear2 = Linear(dim_feedforward, d_model)
# normalizes each feature across batch samples and time steps
self.norm1 = BatchNorm1d(d_model, eps=1e-5)
self.norm2 = BatchNorm1d(d_model, eps=1e-5)
self.dropout1 = Dropout(dropout)
self.dropout2 = Dropout(dropout)
self.activation = _get_activation_fn(activation)
def __setstate__(self, state):
if 'activation' not in state:
state['activation'] = F.relu
super(TransformerBatchNormEncoderLayer, self).__setstate__(state)
def forward(self, src: Tensor, src_mask: Optional[Tensor] = None,
src_key_padding_mask: Optional[Tensor] = None) -> Tensor:
r"""Pass the input through the encoder layer.
Args:
src: the sequence to the encoder layer (required).
src_mask: the mask for the src sequence (optional).
src_key_padding_mask: the mask for the src keys per batch (optional).
Shape:
see the docs in Transformer class.
"""
src2 = self.self_attn(src, src, src, attn_mask=src_mask,
key_padding_mask=src_key_padding_mask)[0]
src = src + self.dropout1(src2) # (seq_len, batch_size, d_model)
src = src.permute(1, 2, 0) # (batch_size, d_model, seq_len)
# src = src.reshape([src.shape[0], -1]) # (batch_size, seq_length * d_model)
src = self.norm1(src)
src = src.permute(2, 0, 1) # restore (seq_len, batch_size, d_model)
src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
src = src + self.dropout2(src2) # (seq_len, batch_size, d_model)
src = src.permute(1, 2, 0) # (batch_size, d_model, seq_len)
src = self.norm2(src)
src = src.permute(2, 0, 1) # restore (seq_len, batch_size, d_model)
return src
class TSTransformerEncoder(nn.Module):
def __init__(self, feat_dim, max_len, d_model, n_heads, num_layers, dim_feedforward, dropout=0.1,
pos_encoding='fixed', activation='gelu', norm='BatchNorm', freeze=False):
super(TSTransformerEncoder, self).__init__()
self.max_len = max_len
self.d_model = d_model
self.n_heads = n_heads
self.project_inp = nn.Linear(feat_dim, d_model)
self.pos_enc = get_pos_encoder(pos_encoding)(
d_model, dropout=dropout*(1.0 - freeze), max_len=max_len)
if norm == 'LayerNorm':
encoder_layer = TransformerEncoderLayer(
d_model, self.n_heads, dim_feedforward, dropout*(1.0 - freeze), activation=activation)
else:
encoder_layer = TransformerBatchNormEncoderLayer(
d_model, self.n_heads, dim_feedforward, dropout*(1.0 - freeze), activation=activation)
self.transformer_encoder = nn.TransformerEncoder(
encoder_layer, num_layers)
self.output_layer = nn.Linear(d_model, feat_dim)
self.act = _get_activation_fn(activation)
self.dropout1 = nn.Dropout(dropout)
self.feat_dim = feat_dim
def forward(self, X, padding_masks):
"""
Args:
X: (batch_size, seq_length, feat_dim) torch tensor of masked features (input)
padding_masks: (batch_size, seq_length) boolean tensor, 1 means keep vector at this position, 0 means padding
Returns:
output: (batch_size, seq_length, feat_dim)
"""
        # permute because pytorch convention for transformers is [seq_length, batch_size, feat_dim]. padding_masks: [batch_size, seq_length]
inp = X.permute(1, 0, 2)
inp = self.project_inp(inp) * math.sqrt(
self.d_model) # [seq_length, batch_size, d_model] project input vectors to d_model dimensional space
inp = self.pos_enc(inp) # add positional encoding
# NOTE: logic for padding masks is reversed to comply with definition in MultiHeadAttention, TransformerEncoderLayer
# (seq_length, batch_size, d_model)
output = self.transformer_encoder(
inp, src_key_padding_mask=~padding_masks)
# the output transformer encoder/decoder embeddings don't include non-linearity
output = self.act(output)
output = output.permute(1, 0, 2) # (batch_size, seq_length, d_model)
output = self.dropout1(output)
# Most probably defining a Linear(d_model,feat_dim) vectorizes the operation over (seq_length, batch_size).
# (batch_size, seq_length, feat_dim)
output = self.output_layer(output)
return output
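# Illustrative forward pass (a minimal sketch; hyperparameters are made up, not from the original
# configs). The default norm exercises TransformerBatchNormEncoderLayer defined above.
def _demo_ts_transformer_encoder():
    model = TSTransformerEncoder(feat_dim=3, max_len=10, d_model=16, n_heads=2,
                                 num_layers=1, dim_feedforward=32)
    X = torch.randn(4, 10, 3)                            # (batch, seq_len, feat_dim)
    padding_masks = torch.ones(4, 10, dtype=torch.bool)  # True: keep position
    return model(X, padding_masks).shape                 # torch.Size([4, 10, 3])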
class TSTransformerEncoderClassiregressor(nn.Module):
"""
Simplest classifier/regressor. Can be either regressor or classifier because the output does not include
softmax. Concatenates final layer embeddings and uses 0s to ignore padding embeddings in final output layer.
"""
def __init__(self, feat_dim, max_len, d_model, n_heads, num_layers, dim_feedforward, num_classes,
dropout=0.1, pos_encoding='fixed', activation='gelu', norm='BatchNorm', freeze=False, nonlinear=True):
super(TSTransformerEncoderClassiregressor, self).__init__()
self.max_len = max_len
self.d_model = d_model
self.n_heads = n_heads
self.project_inp = nn.Linear(feat_dim, d_model)
self.pos_enc = get_pos_encoder(pos_encoding)(
d_model, dropout=dropout*(1.0 - freeze), max_len=max_len)
if norm == 'LayerNorm':
encoder_layer = TransformerEncoderLayer(
d_model, self.n_heads, dim_feedforward, dropout*(1.0 - freeze), activation=activation)
else:
encoder_layer = TransformerBatchNormEncoderLayer(
d_model, self.n_heads, dim_feedforward, dropout*(1.0 - freeze), activation=activation)
self.transformer_encoder = nn.TransformerEncoder(
encoder_layer, num_layers)
self.act = _get_activation_fn(activation)
self.dropout1 = nn.Dropout(dropout)
self.feat_dim = feat_dim
self.num_classes = num_classes
        self.output_layer = self.build_output_module(
            d_model, max_len, num_classes, nonlinear=nonlinear)
def build_output_module(self, d_model, max_len, num_classes, nonlinear=False):
if nonlinear:
net = nn.Sequential(
nn.Linear(d_model * max_len, d_model * max_len),
nn.BatchNorm1d(d_model * max_len),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(d_model * max_len, num_classes),
                nn.Softmax(dim=1)  # NOTE: CrossEntropyLoss applies log-softmax internally, so training through this head softmaxes twice
)
return net
else:
output_layer = nn.Linear(d_model * max_len, num_classes)
# no softmax (or log softmax), because CrossEntropyLoss does this internally. If probabilities are needed,
# add F.log_softmax and use NLLoss
return output_layer
def forward(self, X, padding_masks):
"""
Args:
X: (batch_size, seq_length, feat_dim) torch tensor of masked features (input)
padding_masks: (batch_size, seq_length) boolean tensor, 1 means keep vector at this position, 0 means padding
Returns:
output: (batch_size, num_classes)
"""
        # permute because pytorch convention for transformers is [seq_length, batch_size, feat_dim]. padding_masks: [batch_size, seq_length]
inp = X.permute(1, 0, 2)
inp = self.project_inp(inp) * math.sqrt(
self.d_model) # [seq_length, batch_size, d_model] project input vectors to d_model dimensional space
inp = self.pos_enc(inp) # add positional encoding
# NOTE: logic for padding masks is reversed to comply with definition in MultiHeadAttention, TransformerEncoderLayer
# (seq_length, batch_size, d_model)
output = self.transformer_encoder(
inp, src_key_padding_mask=~padding_masks)
# the output transformer encoder/decoder embeddings don't include non-linearity
output = self.act(output)
output = output.permute(1, 0, 2) # (batch_size, seq_length, d_model)
output = self.dropout1(output)
# Output
# zero-out padding embeddings
output = output * padding_masks.unsqueeze(-1)
# (batch_size, seq_length * d_model)
output = output.reshape(output.shape[0], -1)
output = self.output_layer(output) # (batch_size, num_classes)
return output
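# Illustrative forward pass (a minimal sketch; hyperparameters are made up, not from the original configs):
def _demo_classiregressor():
    model = TSTransformerEncoderClassiregressor(feat_dim=3, max_len=10, d_model=16, n_heads=2,
                                                num_layers=1, dim_feedforward=32, num_classes=5,
                                                nonlinear=False)
    X = torch.randn(4, 10, 3)                            # (batch, seq_len, feat_dim)
    padding_masks = torch.ones(4, 10, dtype=torch.bool)  # True: keep position
    return model(X, padding_masks).shape                 # torch.Size([4, 5]) of logits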
================================================
FILE: ts_classification_methods/tst_cls/src/optimizers.py
================================================
import math
import torch
from torch.optim.optimizer import Optimizer
def get_optimizer(name):
    if name == "Adam":
        return torch.optim.Adam
    elif name == "RAdam":
        return RAdam
    raise ValueError("Optimizer '{}' does not exist".format(name))
# from https://github.com/LiyuanLucasLiu/RAdam/blob/master/radam/radam.py
class RAdam(Optimizer):
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0, degenerated_to_sgd=True):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
self.degenerated_to_sgd = degenerated_to_sgd
if isinstance(params, (list, tuple)) and len(params) > 0 and isinstance(params[0], dict):
for param in params:
if 'betas' in param and (param['betas'][0] != betas[0] or param['betas'][1] != betas[1]):
param['buffer'] = [[None, None, None] for _ in range(10)]
defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay,
buffer=[[None, None, None] for _ in range(10)])
super(RAdam, self).__init__(params, defaults)
def __setstate__(self, state):
super(RAdam, self).__setstate__(state)
def step(self, closure=None):
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data.float()
if grad.is_sparse:
raise RuntimeError('RAdam does not support sparse gradients')
p_data_fp32 = p.data.float()
state = self.state[p]
if len(state) == 0:
state['step'] = 0
state['exp_avg'] = torch.zeros_like(p_data_fp32)
state['exp_avg_sq'] = torch.zeros_like(p_data_fp32)
else:
state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32)
state['exp_avg_sq'] = state['exp_avg_sq'].type_as(p_data_fp32)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
                exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
                exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
state['step'] += 1
buffered = group['buffer'][int(state['step'] % 10)]
if state['step'] == buffered[0]:
N_sma, step_size = buffered[1], buffered[2]
else:
buffered[0] = state['step']
beta2_t = beta2 ** state['step']
N_sma_max = 2 / (1 - beta2) - 1
N_sma = N_sma_max - 2 * state['step'] * beta2_t / (1 - beta2_t)
buffered[1] = N_sma
# more conservative since it's an approximated value
if N_sma >= 5:
step_size = math.sqrt(
(1 - beta2_t) * (N_sma - 4) / (N_sma_max - 4) * (N_sma - 2) / N_sma * N_sma_max / (
N_sma_max - 2)) / (1 - beta1 ** state['step'])
elif self.degenerated_to_sgd:
step_size = 1.0 / (1 - beta1 ** state['step'])
else:
step_size = -1
buffered[2] = step_size
# more conservative since it's an approximated value
                if N_sma >= 5:
                    if group['weight_decay'] != 0:
                        p_data_fp32.add_(p_data_fp32, alpha=-group['weight_decay'] * group['lr'])
                    denom = exp_avg_sq.sqrt().add_(group['eps'])
                    p_data_fp32.addcdiv_(exp_avg, denom, value=-step_size * group['lr'])
                    p.data.copy_(p_data_fp32)
                elif step_size > 0:
                    if group['weight_decay'] != 0:
                        p_data_fp32.add_(p_data_fp32, alpha=-group['weight_decay'] * group['lr'])
                    p_data_fp32.add_(exp_avg, alpha=-step_size * group['lr'])
                    p.data.copy_(p_data_fp32)
return loss
class PlainRAdam(Optimizer):
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0, degenerated_to_sgd=True):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
self.degenerated_to_sgd = degenerated_to_sgd
defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay)
super(PlainRAdam, self).__init__(params, defaults)
def __setstate__(self, state):
super(PlainRAdam, self).__setstate__(state)
def step(self, closure=None):
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data.float()
if grad.is_sparse:
raise RuntimeError('RAdam does not support sparse gradients')
p_data_fp32 = p.data.float()
state = self.state[p]
if len(state) == 0:
state['step'] = 0
state['exp_avg'] = torch.zeros_like(p_data_fp32)
state['exp_avg_sq'] = torch.zeros_like(p_data_fp32)
else:
state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32)
state['exp_avg_sq'] = state['exp_avg_sq'].type_as(p_data_fp32)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
                exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
                exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
state['step'] += 1
beta2_t = beta2 ** state['step']
N_sma_max = 2 / (1 - beta2) - 1
N_sma = N_sma_max - 2 * state['step'] * beta2_t / (1 - beta2_t)
# more conservative since it's an approximated value
                if N_sma >= 5:
                    if group['weight_decay'] != 0:
                        p_data_fp32.add_(p_data_fp32, alpha=-group['weight_decay'] * group['lr'])
                    step_size = group['lr'] * math.sqrt(
                        (1 - beta2_t) * (N_sma - 4) / (N_sma_max - 4) * (N_sma - 2) / N_sma * N_sma_max / (
                            N_sma_max - 2)) / (1 - beta1 ** state['step'])
                    denom = exp_avg_sq.sqrt().add_(group['eps'])
                    p_data_fp32.addcdiv_(exp_avg, denom, value=-step_size)
                    p.data.copy_(p_data_fp32)
                elif self.degenerated_to_sgd:
                    if group['weight_decay'] != 0:
                        p_data_fp32.add_(p_data_fp32, alpha=-group['weight_decay'] * group['lr'])
                    step_size = group['lr'] / (1 - beta1 ** state['step'])
                    p_data_fp32.add_(exp_avg, alpha=-step_size)
                    p.data.copy_(p_data_fp32)
return loss
class AdamW(Optimizer):
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0, warmup=0):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
defaults = dict(lr=lr, betas=betas, eps=eps,
weight_decay=weight_decay, warmup=warmup)
super(AdamW, self).__init__(params, defaults)
def __setstate__(self, state):
super(AdamW, self).__setstate__(state)
def step(self, closure=None):
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data.float()
if grad.is_sparse:
raise RuntimeError('Adam does not support sparse gradients, please consider SparseAdam instead')
p_data_fp32 = p.data.float()
state = self.state[p]
if len(state) == 0:
state['step'] = 0
state['exp_avg'] = torch.zeros_like(p_data_fp32)
state['exp_avg_sq'] = torch.zeros_like(p_data_fp32)
else:
state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32)
state['exp_avg_sq'] = state['exp_avg_sq'].type_as(p_data_fp32)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
                exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
                exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
denom = exp_avg_sq.sqrt().add_(group['eps'])
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
if group['warmup'] > state['step']:
scheduled_lr = 1e-8 + state['step'] * group['lr'] / group['warmup']
else:
scheduled_lr = group['lr']
step_size = scheduled_lr * math.sqrt(bias_correction2) / bias_correction1
                if group['weight_decay'] != 0:
                    p_data_fp32.add_(p_data_fp32, alpha=-group['weight_decay'] * scheduled_lr)
                p_data_fp32.addcdiv_(exp_avg, denom, value=-step_size)
p.data.copy_(p_data_fp32)
return loss
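# Illustrative smoke test for the optimizers above (a minimal sketch; the toy model is made up):
def _demo_radam_step():
    model = torch.nn.Linear(8, 2)
    optimizer = get_optimizer("RAdam")(model.parameters(), lr=1e-3)
    model(torch.randn(4, 8)).sum().backward()
    optimizer.step()  # applies one RAdam update to the toy parameters
    return model.weight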
================================================
FILE: ts_classification_methods/tst_cls/src/options.py
================================================
import argparse
class Options(object):
def __init__(self):
# Handle command line arguments
self.parser = argparse.ArgumentParser(
description='Run a complete training pipeline. Optionally, a JSON configuration file can be used, to overwrite command-line arguments.')
## Run from config file
self.parser.add_argument('--config', dest='config_filepath',
help='Configuration .json file (optional). Overwrites existing command-line args!')
## Run from command-line arguments
# I/O
self.parser.add_argument('--output_dir', default='./results',
help='Root output directory. Must exist. Time-stamped directories will be created inside.')
self.parser.add_argument('--data_dir', default='./data',
help='Data directory')
self.parser.add_argument('--load_model',
help='Path to pre-trained model.')
self.parser.add_argument('--resume', action='store_true',
help='If set, will load `starting_epoch` and state of optimizer, besides model weights.')
self.parser.add_argument('--change_output', action='store_true',
help='Whether the loaded model will be fine-tuned on a different task (necessitating a different output layer)')
self.parser.add_argument('--save_all', action='store_true',
help='If set, will save model weights (and optimizer state) for every epoch; otherwise just latest')
self.parser.add_argument('--name', dest='experiment_name', default='',
help='A string identifier/name for the experiment to be run - it will be appended to the output directory name, before the timestamp')
self.parser.add_argument('--comment', type=str, default='', help='A comment/description of the experiment')
self.parser.add_argument('--no_timestamp', action='store_true',
help='If set, a timestamp will not be appended to the output directory name')
self.parser.add_argument('--records_file', default='./records.xls',
help='Excel file keeping all records of experiments')
# System
self.parser.add_argument('--console', action='store_true',
help="Optimize printout for console output; otherwise for file")
self.parser.add_argument('--print_interval', type=int, default=100,
help='Print batch info every this many batches')
self.parser.add_argument('--gpu', type=str, default='0',
help='GPU index, -1 for CPU')
self.parser.add_argument('--n_proc', type=int, default=-1,
help='Number of processes for data loading/preprocessing. By default, equals num. of available cores.')
self.parser.add_argument('--num_workers', type=int, default=5,
help='dataloader threads. 0 for single-thread.')
self.parser.add_argument('--seed',
help='Seed used for splitting sets. None by default, set to an integer for reproducibility')
# Dataset
self.parser.add_argument('--limit_size', type=float, default=None,
help="Limit dataset to specified smaller random sample, e.g. for rapid debugging purposes. "
"If in [0,1], it will be interpreted as a proportion of the dataset, "
"otherwise as an integer absolute number of samples")
self.parser.add_argument('--test_only', choices={'testset', 'fold_transduction'},
help='If set, no training will take place; instead, trained model will be loaded and evaluated on test set')
self.parser.add_argument('--data_class', type=str, default='tsra',
help="Which type of data should be processed.")
self.parser.add_argument('--labels', type=str,
help="In case a dataset contains several labels (multi-task), "
"which type of labels should be used in regression or classification, i.e. name of column(s).")
self.parser.add_argument('--test_from',
help='If given, will read test IDs from specified text file containing sample IDs one in each row')
self.parser.add_argument('--test_ratio', type=float, default=0,
help="Set aside this proportion of the dataset as a test set")
self.parser.add_argument('--val_ratio', type=float, default=0.2,
help="Proportion of the dataset to be used as a validation set")
self.parser.add_argument('--pattern', type=str,
help='Regex pattern used to select files contained in `data_dir`. If None, all data will be used.')
self.parser.add_argument('--val_pattern', type=str,
help="""Regex pattern used to select files contained in `data_dir` exclusively for the validation set.
If None, a positive `val_ratio` will be used to reserve part of the common data set.""")
self.parser.add_argument('--test_pattern', type=str,
help="""Regex pattern used to select files contained in `data_dir` exclusively for the test set.
If None, `test_ratio`, if specified, will be used to reserve part of the common data set.""")
self.parser.add_argument('--normalization',
choices={'standardization', 'minmax', 'per_sample_std', 'per_sample_minmax'},
default='standardization',
help='If specified, will apply normalization on the input features of a dataset.')
self.parser.add_argument('--norm_from',
help="""If given, will read normalization values (e.g. mean, std, min, max) from specified pickle file.
The columns correspond to features, rows correspond to mean, std or min, max.""")
self.parser.add_argument('--subsample_factor', type=int,
help='Sub-sampling factor used for long sequences: keep every kth sample')
# Training process
self.parser.add_argument('--task', choices={"imputation", "transduction", "classification", "regression", "pretrain_and_finetune"},
default="imputation",
help=("Training objective/task: imputation of masked values,\n"
" transduction of features to other features,\n"
" classification of entire time series,\n"
" regression of scalar(s) for entire time series"))
self.parser.add_argument('--masking_ratio', type=float, default=0.15,
help='Imputation: mask this proportion of each variable')
self.parser.add_argument('--mean_mask_length', type=float, default=3,
help="Imputation: the desired mean length of masked segments. Used only when `mask_distribution` is 'geometric'.")
        self.parser.add_argument('--mask_mode', choices={'separate', 'concurrent'}, default='separate',
                                 help=("Imputation: whether each variable should be masked separately "
                                       "or all variables at certain positions should be masked concurrently"))
        self.parser.add_argument('--mask_distribution', choices={'geometric', 'bernoulli'}, default='geometric',
                                 help=("Imputation: whether each mask sequence element is sampled independently at random, "
                                       "or whether sampling follows a Markov chain (stateful), resulting in "
                                       "geometric distributions of masked sequences of a desired mean_mask_length"))
self.parser.add_argument('--exclude_feats', type=str, default=None,
help='Imputation: Comma separated string of indices corresponding to features to be excluded from masking')
self.parser.add_argument('--mask_feats', type=str, default='0, 1',
help='Transduction: Comma separated string of indices corresponding to features to be masked')
self.parser.add_argument('--start_hint', type=float, default=0.0,
help='Transduction: proportion at the beginning of time series which will not be masked')
self.parser.add_argument('--end_hint', type=float, default=0.0,
help='Transduction: proportion at the end of time series which will not be masked')
self.parser.add_argument('--harden', action='store_true',
help='Makes training objective progressively harder, by masking more of the input')
self.parser.add_argument('--epochs', type=int, default=400,
help='Number of training epochs')
self.parser.add_argument('--val_interval', type=int, default=1,
help='Evaluate on validation set every this many epochs. Must be >= 1.')
self.parser.add_argument('--optimizer', choices={"Adam", "RAdam"}, default="RAdam", help="Optimizer")
self.parser.add_argument('--lr', type=float, default=1e-3,
help='learning rate (default holds for batch size 64)')
self.parser.add_argument('--lr_step', type=str, default='1000000',
help='Comma separated string of epochs when to reduce learning rate by a factor of 10.'
' The default is a large value, meaning that the learning rate will not change.')
self.parser.add_argument('--lr_factor', type=str, default='0.1',
help=("Comma separated string of multiplicative factors to be applied to lr "
"at corresponding steps specified in `lr_step`. If a single value is provided, "
"it will be replicated to match the number of steps in `lr_step`."))
self.parser.add_argument('--batch_size', type=int, default=64,
help='Training batch size')
self.parser.add_argument('--l2_reg', type=float, default=0,
help='L2 weight regularization parameter')
self.parser.add_argument('--global_reg', action='store_true',
help='If set, L2 regularization will be applied to all weights instead of only the output layer')
self.parser.add_argument('--key_metric', choices={'loss', 'accuracy', 'precision'}, default='loss',
help='Metric used for defining best epoch')
self.parser.add_argument('--freeze', action='store_true',
help='If set, freezes all layer parameters except for the output layer. Also removes dropout except before the output layer')
# Model
self.parser.add_argument('--model', choices={"transformer", "LINEAR"}, default="transformer",
help="Model class")
self.parser.add_argument('--max_seq_len', type=int,
help="""Maximum input sequence length. Determines size of transformer layers.
If not provided, then the value defined inside the data class will be used.""")
self.parser.add_argument('--data_window_len', type=int,
help="""Used instead of the `max_seq_len`, when the data samples must be
segmented into windows. Determines maximum input sequence length
(size of transformer layers).""")
self.parser.add_argument('--d_model', type=int, default=64,
help='Internal dimension of transformer embeddings')
self.parser.add_argument('--dim_feedforward', type=int, default=256,
help='Dimension of dense feedforward part of transformer layer')
self.parser.add_argument('--num_heads', type=int, default=8,
help='Number of multi-headed attention heads')
self.parser.add_argument('--num_layers', type=int, default=3,
help='Number of transformer encoder layers (blocks)')
self.parser.add_argument('--dropout', type=float, default=0.1,
help='Dropout applied to most transformer encoder layers')
        self.parser.add_argument('--pos_encoding', choices={'fixed', 'learnable'}, default='learnable',
                                 help='Type of positional encoding: fixed (sinusoidal) or learnable')
self.parser.add_argument('--activation', choices={'relu', 'gelu'}, default='gelu',
help='Activation to be used in transformer encoder')
self.parser.add_argument('--normalization_layer', choices={'BatchNorm', 'LayerNorm'}, default='BatchNorm',
help='Normalization layer to be used internally in transformer encoder')
# my arg, for k-fold
self.parser.add_argument('--ith', type=int, help='the ith training scripts')
self.parser.add_argument('--dataset', type=str, help='target dataset name')
# transfer learning
self.parser.add_argument('--weights_save_path', type=str, help='encoder weights saving path')
self.parser.add_argument('--load_root', default=None, type=str, help='load root')
self.parser.add_argument('--source_dataset', type=str, help='transfer source dataset')
self.parser.add_argument('--multi_gpu', type=str)
        # for classification. NOTE: argparse's `type=bool` would treat any non-empty string
        # (even 'False') as True, so an explicit string-to-bool conversion is used instead.
        self.parser.add_argument('--nonlinear', type=lambda s: str(s).lower() in ('true', '1', 'yes'),
                                 default=True, help='use linear or non-linear classifier')
def parse(self):
args = self.parser.parse_args()
args.lr_step = [int(i) for i in args.lr_step.split(',')]
args.lr_factor = [float(i) for i in args.lr_factor.split(',')]
if (len(args.lr_step) > 1) and (len(args.lr_factor) == 1):
args.lr_factor = len(args.lr_step) * args.lr_factor # replicate
        assert len(args.lr_step) == len(
            args.lr_factor), "You must specify as many values in `lr_factor` as in `lr_step`"
if args.exclude_feats is not None:
args.exclude_feats = [int(i) for i in args.exclude_feats.split(',')]
args.mask_feats = [int(i) for i in args.mask_feats.split(',')]
if args.val_pattern is not None:
args.val_ratio = 0
args.test_ratio = 0
return args
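# Illustrative sketch of the post-processing done by parse() (the argv values are made up):
def _demo_parse():
    import sys
    argv_backup, sys.argv = sys.argv, [sys.argv[0], '--lr_step', '100,200', '--lr_factor', '0.1']
    try:
        args = Options().parse()
    finally:
        sys.argv = argv_backup
    return args.lr_step, args.lr_factor  # ([100, 200], [0.1, 0.1]): the single factor is replicated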
================================================
FILE: ts_classification_methods/tst_cls/src/running.py
================================================
import logging
import sys
import os
import traceback
import json
from datetime import datetime
import string
import random
from collections import OrderedDict
import time
import pickle
from functools import partial
import ipdb
import torch
from torch.utils.data import DataLoader
import numpy as np
import sklearn
from utils import utils, analysis
from models.loss import l2_reg_loss
from datasets.dataset import ImputationDataset, TransductionDataset, ClassiregressionDataset, collate_unsuperv, collate_superv
logger = logging.getLogger('__main__')
NEG_METRICS = {'loss'} # metrics for which "better" is less
val_times = {"total_time": 0, "count": 0}
def pipeline_factory(config, device):
"""For the task specified in the configuration returns the corresponding combination of
Dataset class, collate function and Runner class."""
task = config['task']
if task == "pretrain_and_finetune":
        return (partial(ImputationDataset, mean_mask_length=config['mean_mask_length'],
                        masking_ratio=config['masking_ratio'], mode=config['mask_mode'],
                        distribution=config['mask_distribution'], exclude_feats=config['exclude_feats'],
                        device=device),
                collate_unsuperv, UnsupervisedRunner,
                partial(ClassiregressionDataset, device=device), collate_superv, SupervisedRunner)
if task == "imputation":
return partial(ImputationDataset, mean_mask_length=config['mean_mask_length'],
masking_ratio=config['masking_ratio'], mode=config['mask_mode'],
distribution=config['mask_distribution'], exclude_feats=config['exclude_feats'], device=device),\
collate_unsuperv, UnsupervisedRunner
if task == "transduction":
return partial(TransductionDataset, mask_feats=config['mask_feats'],
start_hint=config['start_hint'], end_hint=config['end_hint']), collate_unsuperv, UnsupervisedRunner
if (task == "classification") or (task == "regression"):
return partial(ClassiregressionDataset,device=device), collate_superv, SupervisedRunner
else:
raise NotImplementedError("Task '{}' not implemented".format(task))
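# Illustrative sketch of the factory's return value for a classification run (only the 'task'
# key is read on this branch; the device string is made up):
def _demo_pipeline_factory():
    dataset_class, collate_fn, runner_class = pipeline_factory({'task': 'classification'}, device='cpu')
    return dataset_class, collate_fn, runner_class  # partial(ClassiregressionDataset), collate_superv, SupervisedRunner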
def setup(args):
"""Prepare training session: read configuration from file (takes precedence), create directories.
Input:
args: arguments object from argparse
Returns:
config: configuration dictionary
"""
config = args.__dict__ # configuration dictionary
if args.config_filepath is not None:
logger.info("Reading configuration ...")
try: # dictionary containing the entire configuration settings in a hierarchical fashion
config.update(utils.load_config(args.config_filepath))
except:
logger.critical("Failed to load configuration file. Check JSON syntax and verify that files exist")
traceback.print_exc()
sys.exit(1)
# Create output directory
initial_timestamp = datetime.now()
output_dir = config['output_dir']
if not os.path.isdir(output_dir):
raise IOError(
"Root directory '{}', where the directory of the experiment will be created, must exist".format(output_dir))
output_dir = os.path.join(output_dir, config['experiment_name'])
formatted_timestamp = initial_timestamp.strftime("%Y-%m-%d_%H-%M-%S")
config['initial_timestamp'] = formatted_timestamp
if (not config['no_timestamp']) or (len(config['experiment_name']) == 0):
rand_suffix = "".join(random.choices(string.ascii_letters + string.digits, k=3))
output_dir += "_" + formatted_timestamp + "_" + rand_suffix
config['output_dir'] = output_dir
config['save_dir'] = os.path.join(output_dir, 'checkpoints')
config['pred_dir'] = os.path.join(output_dir, 'predictions')
config['tensorboard_dir'] = os.path.join(output_dir, 'tb_summaries')
utils.create_dirs([config['save_dir'], config['pred_dir'], config['tensorboard_dir']])
# Save configuration as a (pretty) json file
with open(os.path.join(output_dir, 'configuration.json'), 'w') as fp:
json.dump(config, fp, indent=4, sort_keys=True)
logger.info("Stored configuration file in '{}'".format(output_dir))
return config
def fold_evaluate(dataset, model, device, loss_module, target_feats, config, dataset_name):
allfolds = {'target_feats': target_feats, # list of len(num_folds), each element: list of target feature integer indices
'predictions': [], # list of len(num_folds), each element: (num_samples, seq_len, feat_dim) prediction per sample
'targets': [], # list of len(num_folds), each element: (num_samples, seq_len, feat_dim) target/original input per sample
'target_masks': [], # list of len(num_folds), each element: (num_samples, seq_len, feat_dim) boolean mask per sample
'metrics': [], # list of len(num_folds), each element: (num_samples, num_metrics) metric per sample
'IDs': []} # list of len(num_folds), each element: (num_samples,) ID per sample
for i, tgt_feats in enumerate(target_feats):
dataset.mask_feats = tgt_feats # set the transduction target features
loader = DataLoader(dataset=dataset,
batch_size=config['batch_size'],
shuffle=False,
num_workers=config['num_workers'],
pin_memory=True,
collate_fn=lambda x: collate_unsuperv(x, max_len=config['max_seq_len']))
evaluator = UnsupervisedRunner(model, loader, device, loss_module,
print_interval=config['print_interval'], console=config['console'])
logger.info("Evaluating {} set, fold: {}, target features: {}".format(dataset_name, i, tgt_feats))
aggr_metrics, per_batch = evaluate(evaluator)
metrics_array = convert_metrics_per_batch_to_per_sample(per_batch['metrics'], per_batch['target_masks'])
metrics_array = np.concatenate(metrics_array, axis=0)
allfolds['metrics'].append(metrics_array)
allfolds['predictions'].append(np.concatenate(per_batch['predictions'], axis=0))
allfolds['targets'].append(np.concatenate(per_batch['targets'], axis=0))
allfolds['target_masks'].append(np.concatenate(per_batch['target_masks'], axis=0))
allfolds['IDs'].append(np.concatenate(per_batch['IDs'], axis=0))
metrics_mean = np.mean(metrics_array, axis=0)
metrics_std = np.std(metrics_array, axis=0)
        for m, metric_name in enumerate(list(aggr_metrics.keys())[1:]):  # skip the leading 'epoch' entry
            logger.info("{}:: Mean: {:.3f}, std: {:.3f}".format(metric_name, metrics_mean[m], metrics_std[m]))
pred_filepath = os.path.join(config['pred_dir'], dataset_name + '_fold_transduction_predictions.pickle')
logger.info("Serializing predictions into {} ... ".format(pred_filepath))
with open(pred_filepath, 'wb') as f:
pickle.dump(allfolds, f, pickle.HIGHEST_PROTOCOL)
def convert_metrics_per_batch_to_per_sample(metrics, target_masks):
"""
Args:
metrics: list of len(num_batches), each element: list of len(num_metrics), each element: (num_active_in_batch,) metric per element
target_masks: list of len(num_batches), each element: (batch_size, seq_len, feat_dim) boolean mask: 1s active, 0s ignore
Returns:
metrics_array = list of len(num_batches), each element: (batch_size, num_metrics) metric per sample
"""
metrics_array = []
for b, batch_target_masks in enumerate(target_masks):
num_active_per_sample = np.sum(batch_target_masks, axis=(1, 2))
batch_metrics = np.stack(metrics[b], axis=1) # (num_active_in_batch, num_metrics)
ind = 0
metrics_per_sample = np.zeros((len(num_active_per_sample), batch_metrics.shape[1])) # (batch_size, num_metrics)
for n, num_active in enumerate(num_active_per_sample):
new_ind = ind + num_active
metrics_per_sample[n, :] = np.sum(batch_metrics[ind:new_ind, :], axis=0)
ind = new_ind
metrics_array.append(metrics_per_sample)
return metrics_array
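# A tiny worked example of the conversion above (values are made up for illustration;
# one batch of two samples with 2 and 1 active elements respectively):
def _demo_convert_metrics_per_batch_to_per_sample():
    target_masks = [np.array([[[1, 1], [0, 0]],
                              [[1, 0], [0, 0]]])]  # one batch: (batch_size=2, seq_len=2, feat_dim=2)
    metrics = [[np.array([0.5, 1.5, 2.0])]]        # one metric with 3 active elements in the batch
    return convert_metrics_per_batch_to_per_sample(metrics, target_masks)  # [array([[2.], [2.]])]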
def evaluate(evaluator):
"""Perform a single, one-off evaluation on an evaluator object (initialized with a dataset)"""
eval_start_time = time.time()
with torch.no_grad():
aggr_metrics, per_batch = evaluator.evaluate(epoch_num=None, keep_all=True)
eval_runtime = time.time() - eval_start_time
print()
print_str = 'Evaluation Summary: '
for k, v in aggr_metrics.items():
if v is not None:
print_str += '{}: {:8f} | '.format(k, v)
logger.info(print_str)
logger.info("Evaluation runtime: {} hours, {} minutes, {} seconds\n".format(*utils.readable_time(eval_runtime)))
return aggr_metrics, per_batch
def validate(val_evaluator, tensorboard_writer, config, best_metrics, best_value, epoch):
"""Run an evaluation on the validation set while logging metrics, and handle outcome"""
eval_start_time = time.time()
with torch.no_grad():
aggr_metrics = val_evaluator.evaluate(epoch)
eval_runtime = time.time() - eval_start_time
global val_times
val_times["total_time"] += eval_runtime
val_times["count"] += 1
avg_val_time = val_times["total_time"] / val_times["count"]
condition = (aggr_metrics['loss'] < best_value)
if condition or epoch==1:
best_value = aggr_metrics['loss']
best_metrics = aggr_metrics.copy()
return aggr_metrics, best_metrics, best_value, condition
def check_progress(epoch):
if epoch in [100, 140, 160, 220, 280, 340]:
return True
else:
return False
class BaseRunner(object):
def __init__(self, model, dataloader, device, loss_module, optimizer=None, l2_reg=None, print_interval=10, console=True):
self.model = model
self.dataloader = dataloader
self.device = device
self.optimizer = optimizer
self.loss_module = loss_module
self.l2_reg = l2_reg
self.print_interval = print_interval
self.printer = utils.Printer(console=console)
self.epoch_metrics = OrderedDict()
def train_epoch(self, epoch_num=None):
raise NotImplementedError('Please override in child class')
def evaluate(self, epoch_num=None, keep_all=True):
raise NotImplementedError('Please override in child class')
def print_callback(self, i_batch, metrics, prefix=''):
total_batches = len(self.dataloader)
template = "{:5.1f}% | batch: {:9d} of {:9d}"
content = [100 * (i_batch / total_batches), i_batch, total_batches]
for met_name, met_value in metrics.items():
template += "\t|\t{}".format(met_name) + ": {:g}"
content.append(met_value)
dyn_string = template.format(*content)
dyn_string = prefix + dyn_string
self.printer.print(dyn_string)
class UnsupervisedRunner(BaseRunner):
def train_epoch(self, epoch_num=None):
self.model = self.model.train()
epoch_loss = 0 # total loss of epoch
total_active_elements = 0 # total unmasked elements in epoch
for i, batch in enumerate(self.dataloader):
X, targets, target_masks, padding_masks, IDs = batch
targets = targets.to(self.device)
target_masks = target_masks.to(self.device) # 1s: mask and predict, 0s: unaffected input (ignore)
padding_masks = padding_masks.to(self.device) # 0s: ignore
predictions = self.model(X.to(self.device), padding_masks) # (batch_size, padded_length, feat_dim)
# Cascade noise masks (batch_size, padded_length, feat_dim) and padding masks (batch_size, padded_length)
target_masks = target_masks * padding_masks.unsqueeze(-1)
loss = self.loss_module(predictions, targets, target_masks) # (num_active,) individual loss (square error per element) for each active value in batch
batch_loss = torch.sum(loss)
mean_loss = batch_loss / len(loss) # mean loss (over active elements) used for optimization
if self.l2_reg:
total_loss = mean_loss + self.l2_reg * l2_reg_loss(self.model)
else:
total_loss = mean_loss
# Zero gradients, perform a backward pass, and update the weights.
self.optimizer.zero_grad()
total_loss.backward()
# torch.nn.utils.clip_grad_value_(self.model.parameters(), clip_value=1.0)
torch.nn.utils.clip_grad_norm_(self.model.parameters(), max_norm=4.0)
self.optimizer.step()
metrics = {"loss": mean_loss.item()}
if i % self.print_interval == 0:
ending = "" if epoch_num is None else 'Epoch {} '.format(epoch_num)
self.print_callback(i, metrics, prefix='Training ' + ending)
with torch.no_grad():
total_active_elements += len(loss)
epoch_loss += batch_loss.item() # add total loss of batch
epoch_loss = epoch_loss / total_active_elements # average loss per element for whole epoch
self.epoch_metrics['epoch'] = epoch_num
self.epoch_metrics['loss'] = epoch_loss
return self.epoch_metrics
def evaluate(self, epoch_num=None, keep_all=False):
self.model = self.model.eval()
epoch_loss = 0 # total loss of epoch
total_active_elements = 0 # total unmasked elements in epoch
if keep_all:
per_batch = {'target_masks': [], 'targets': [], 'predictions': [], 'metrics': [], 'IDs': []}
for i, batch in enumerate(self.dataloader):
X, targets, target_masks, padding_masks, IDs = batch
targets = targets.to(self.device)
target_masks = target_masks.to(self.device) # 1s: mask and predict, 0s: unaffected input (ignore)
padding_masks = padding_masks.to(self.device) # 0s: ignore
# TODO: for debugging
# input_ok = utils.check_tensor(X, verbose=False, zero_thresh=1e-8, inf_thresh=1e4)
# if not input_ok:
# print("Input problem!")
# ipdb.set_trace()
#
# utils.check_model(self.model, verbose=False, stop_on_error=True)
predictions = self.model(X.to(self.device), padding_masks) # (batch_size, padded_length, feat_dim)
# Cascade noise masks (batch_size, padded_length, feat_dim) and padding masks (batch_size, padded_length)
target_masks = target_masks * padding_masks.unsqueeze(-1)
loss = self.loss_module(predictions, targets, target_masks) # (num_active,) individual loss (square error per element) for each active value in batch
batch_loss = torch.sum(loss).cpu().item()
            mean_loss = batch_loss / len(loss)  # mean loss (over active elements) for this batch
if keep_all:
per_batch['target_masks'].append(target_masks.cpu().numpy())
per_batch['targets'].append(targets.cpu().numpy())
per_batch['predictions'].append(predictions.cpu().numpy())
per_batch['metrics'].append([loss.cpu().numpy()])
per_batch['IDs'].append(IDs)
metrics = {"loss": mean_loss}
if i % self.print_interval == 0:
ending = "" if epoch_num is None else 'Epoch {} '.format(epoch_num)
self.print_callback(i, metrics, prefix='Evaluating ' + ending)
total_active_elements += len(loss)
epoch_loss += batch_loss # add total loss of batch
epoch_loss = epoch_loss / total_active_elements # average loss per element for whole epoch
self.epoch_metrics['epoch'] = epoch_num
self.epoch_metrics['loss'] = epoch_loss
if keep_all:
return self.epoch_metrics, per_batch
else:
return self.epoch_metrics
class SupervisedRunner(BaseRunner):
def __init__(self, *args, **kwargs):
super(SupervisedRunner, self).__init__(*args, **kwargs)
        if isinstance(args[3], torch.nn.CrossEntropyLoss):  # args[3] is the `loss_module` positional argument of BaseRunner
            self.classification = True  # True if classification, False if regression
self.analyzer = analysis.Analyzer(print_conf_mat=True)
else:
self.classification = False
def train_epoch(self, epoch_num=None):
self.model = self.model.train()
epoch_loss = 0 # total loss of epoch
total_samples = 0 # total samples in epoch
for i, batch in enumerate(self.dataloader):
X, targets, padding_masks, IDs = batch
targets = targets.to(self.device)
padding_masks = padding_masks.to(self.device) # 0s: ignore
# regression: (batch_size, num_labels); classification: (batch_size, num_classes) of logits
predictions = self.model(X.to(self.device), padding_masks)
targets_label = targets.view((1, -1))
pred_label = torch.argmax(predictions, axis=1)
step_accu = torch.sum(pred_label == targets_label, dim=1)
loss = self.loss_module(predictions, targets) # (batch_size,) loss for each sample in the batch
batch_loss = torch.sum(loss)
mean_loss = batch_loss / len(loss) # mean loss (over samples) used for optimization
if self.l2_reg:
total_loss = mean_loss + self.l2_reg * l2_reg_loss(self.model)
else:
total_loss = mean_loss
# Zero gradients, perform a backward pass, and update the weights.
self.optimizer.zero_grad()
total_loss.backward()
# torch.nn.utils.clip_grad_value_(self.model.parameters(), clip_value=1.0)
torch.nn.utils.clip_grad_norm_(self.model.parameters(), max_norm=4.0)
            self.optimizer.step()
metrics = {"loss": mean_loss.item(), "accuracy":step_accu.cpu().item()/len(loss)}
if i % self.print_interval == 0:
ending = "" if epoch_num is None else 'Epoch {} '.format(epoch_num)
self.print_callback(i, metrics, prefix='Supervised Training ' + ending)
with torch.no_grad():
total_samples += len(loss)
epoch_loss += batch_loss.item() # add total loss of batch
epoch_loss = epoch_loss / total_samples # average loss per sample for whole epoch
self.epoch_metrics['epoch'] = epoch_num
self.epoch_metrics['loss'] = epoch_loss
return self.epoch_metrics
def evaluate(self, epoch_num=None, keep_all=True):
self.model = self.model.eval()
epoch_loss = 0 # total loss of epoch
total_samples = 0 # total samples in epoch
epoch_accu = 0
sum_len = 0
for i, batch in enumerate(self.dataloader):
X, targets, padding_masks, IDs = batch # origin:ids
targets = targets.to(self.device)
padding_masks = padding_masks.to(self.device) # 0s: ignore
# regression: (batch_size, num_labels); classification: (batch_size, num_classes) of logits
predictions = self.model(X.to(self.device), padding_masks)
targets_label = targets.view((1, -1))
pred_label = torch.argmax(predictions, axis=1)
            batch_correct = torch.sum(pred_label == targets_label, dim=1)
            epoch_accu += batch_correct
            sum_len += len(targets)
loss = self.loss_module(predictions, targets) # (batch_size,) loss for each sample in the batch
batch_loss = torch.sum(loss).cpu().item()
mean_loss = batch_loss / len(loss) # mean loss (over samples)
metrics = {"loss": mean_loss, "accuracy":epoch_accu.cpu().item()/len(loss)}
if i % self.print_interval == 0:
ending = "" if epoch_num is None else 'Epoch {} '.format(epoch_num)
self.print_callback(i, metrics, prefix='Supervised Evaluation ' + ending)
total_samples += len(loss)
epoch_loss += batch_loss # add total loss of batch
epoch_loss = epoch_loss / total_samples # average loss per element for whole epoch
epoch_accu = epoch_accu / sum_len
self.epoch_metrics['epoch'] = epoch_num
self.epoch_metrics['loss'] = epoch_loss
self.epoch_metrics['accuracy'] = epoch_accu
return self.epoch_metrics
================================================
FILE: ts_classification_methods/tst_cls/src/utils/__init__.py
================================================
================================================
FILE: ts_classification_methods/tst_cls/src/utils/analysis.py
================================================
"""
Collection of functions which enable the evaluation of a classifier's performance,
by showing confusion matrix, accuracy, recall, precision etc.
"""
import numpy as np
import sys
import matplotlib.pyplot as plt
from sklearn import metrics
from tabulate import tabulate
import math
import logging
from datetime import datetime
def acc_top_k(predictions, y_true):
"""Accuracy when allowing for correct class being in the top k predictions.
Arguments:
predictions: (N_samples, k) array of top class indices (pre-sorted class indices based on score) per sample
y_true: N_samples 1D-array of ground truth labels (integer indices)
Returns:
length k 1D-array of accuracy when allowing for correct class being in top 1, 2, ... k predictions"""
y_true = y_true[:, np.newaxis]
# Create upper triangular matrix of ones, to be used in construction of V
building_blocks = np.zeros((predictions.shape[1], predictions.shape[1]))
building_blocks[np.triu_indices(predictions.shape[1])] = 1
# A matrix of the same shape as predictions. For each sample, the index corresponding
# to a correct prediction is 1, as well as all following indices.
# Example: y_true = [1,0], predictions = [[1 5 4],[2 0 3]]. Then: V = [[1 1 1],[0 1 1]]
V = np.zeros_like(predictions, dtype=int) # validity matrix
sample_ind, rank_ind = np.where(predictions == y_true)
V[sample_ind, :] = building_blocks[rank_ind, :]
return np.mean(V, axis=0)
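# A runnable version of the worked example in the comment above (values are illustrative):
def _demo_acc_top_k():
    predictions = np.array([[1, 5, 4], [2, 0, 3]])  # pre-sorted top-3 class indices per sample
    y_true = np.array([1, 0])
    return acc_top_k(predictions, y_true)           # array([0.5, 1., 1.]): top-1, top-2, top-3 accuracy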
def accuracy(y_pred, y_true, excluded_labels=None):
"""A simple accuracy calculator, which can ignore labels specified in a list"""
if excluded_labels is None:
return np.mean(y_pred == y_true)
else:
included = (y_pred != excluded_labels[0]) & (y_true != excluded_labels[0])
# The following extra check (rather than initializing with an array of ones)
# is done because a single excluded label is the most common case
if len(excluded_labels) > 1:
for label in excluded_labels[1:]:
included &= (y_pred != label) & (y_true != label)
return np.mean(y_pred[included] == y_true[included])
def precision(y_true, y_pred, label):
"""Returns precision for the specified class index"""
predicted_in_C = (y_pred == label)
num_pred_in_C = np.sum(predicted_in_C)
if num_pred_in_C == 0:
return 0
return np.sum(y_true[predicted_in_C] == label) / num_pred_in_C
def recall(y_true, y_pred, label):
"""Returns recall for the specified class index"""
truly_in_C = (y_true == label)
num_truly_in_C = np.sum(truly_in_C)
if num_truly_in_C == 0:
return 0 # or NaN?
return np.sum(y_pred[truly_in_C] == label) / num_truly_in_C
def limiter(metric_functions, y_true, y_pred, y_scores, score_thr, label):
"""Wraps a list of metric functions, i.e precison or recall, by ingoring predictions under the
specified threshold for a specific class.
"""
ltd_pred = np.copy(y_pred)
ltd_pred[(ltd_pred == label) & (y_scores < score_thr)] = -1
output = [func(y_true, ltd_pred, label) for func in metric_functions]
return output
def prec_rec_parametrized_by_thr(y_true, y_pred, y_scores, label, Npoints, min_score=None, max_score=None):
"""Returns an array showing for a specified class of interest, how precision and recall change as a function of
the score threshold (parameter).
Input:
y_true: 1D array of true labels (class indices)
y_pred: 1D array of predicted labels (class indices)
y_scores: 1D array of scores corresponding to predictions in y_pred
label: class label of interest
Npoints: number of score threshold points. Defines "resolution" of the parameter (score threshold)
min_score, max_score: if specified, they impose lower and upper bound limits for the parameter (score thr.)
Output:
prec_rec: ndarray of shape (Npoints, 2), containing a precision (column 0) and recall (column 1) value for each
score threshold value
"""
if (min_score is None) or (max_score is None):
predicted_in_C = (y_pred == label)
min_score = 0.99 * np.amin(y_scores[predicted_in_C]) # guarantees that all predictions are kept
max_score = 1.01 * np.amax(y_scores[predicted_in_C]) # guarantees that no prediction is kept
grid = np.linspace(min_score, max_score, Npoints)
measure = lambda x: limiter([precision, recall], y_true, y_pred, y_scores, x, label)
    return np.array(list(map(measure, grid))), grid
def plot_prec_vs_rec(score_grid, rec, prec, prec_requirement=None, thr_opt=None, title=None, show=True, save_as=None):
"""Plots a figure depicting precision and recall as a function of the score threshold.
Optionally also depicts an imposed precision requirement and a chosen score threshold value."""
if not (thr_opt is None):
thr_opt = thr_opt if not (math.isinf(thr_opt)) else None
plt.figure()
if title:
plt.suptitle(title)
# Recall and Precision vs. Score Threshold
plt.subplot(211)
l_rec, = plt.plot(score_grid, rec, '.-')
    # plt.hold was removed in matplotlib 3.0; axes hold new artists by default
l_prec, = plt.plot(score_grid, prec, 'g.-')
plt.ylim((0, 1.01))
plt.xlabel('score threshold')
legend_lines = [l_rec, l_prec]
legend_labels = ['recall', 'precision']
if prec_requirement:
l_prec_req = plt.axhline(prec_requirement, color='r', linestyle='--')
legend_lines.append(l_prec_req)
legend_labels.append('prec. req.')
if not (thr_opt is None):
l_score_thr = plt.axvline(thr_opt, color='r')
legend_lines.append(l_score_thr)
legend_labels.append('opt. thr.')
plt.legend(legend_lines, legend_labels, loc='lower right', fontsize=10)
# Recall vs. Precision
plt.subplot(212)
plt.plot(prec, rec, '.-')
plt.ylim((0, 1.01))
plt.xlim((0, 1.01))
plt.ylabel('recall')
plt.xlabel('precision')
if prec_requirement:
l_prec_req = plt.axvline(prec_requirement, color='r', linestyle='--')
plt.legend([l_prec_req], ['precision req.'], loc='lower left', fontsize=10)
if save_as:
plt.savefig(save_as, bbox_inches='tight', format='pdf')
if show:
plt.tight_layout()
plt.show(block=False)
def plot_confusion_matrix(ConfMat, label_strings=None, title='Confusion matrix', cmap='Blues'):
"""Plot confusion matrix in a separate window"""
plt.imshow(ConfMat, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
if label_strings:
tick_marks = np.arange(len(label_strings))
plt.xticks(tick_marks, label_strings, rotation=90)
plt.yticks(tick_marks, label_strings)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
def print_confusion_matrix(ConfMat, label_strings=None, title='Confusion matrix'):
"""Print confusion matrix as text to terminal"""
if label_strings is None:
label_strings = ConfMat.shape[0] * ['']
print(title)
print(len(title) * '-')
# Make printable matrix:
print_mat = []
for i, row in enumerate(ConfMat):
print_mat.append([label_strings[i]] + list(row))
    print(tabulate(print_mat, headers=['True\\Pred'] + label_strings, tablefmt='orgtbl'))
class Analyzer(object):
def __init__(self, maxcharlength=35, plot=False, print_conf_mat=False, output_filepath=None):
self.maxcharlength = maxcharlength
self.plot = plot
self.print_conf_mat = print_conf_mat
# create logger
self.logID = str(
datetime.now()) # this is to enable individual logging configuration between different instances
self.logger = logging.getLogger(self.logID)
self.logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(message)s')
# create console handler
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.INFO)
ch.setFormatter(formatter)
self.logger.addHandler(ch)
if output_filepath:
# create file handler
fh = logging.FileHandler(output_filepath)
fh.setLevel(logging.INFO)
fh.setFormatter(formatter)
self.logger.addHandler(fh)
def show_acc_top_k_improvement(self, y_pred, y_true, k=5, inp='scores'):
"""
        Show how accuracy improves when a prediction counts as successful whenever the correct label is among the top k predictions.
Arguments:
k: integer k mentioned above
inp: string, one of 'scores' or 'indices', defining assumptions for `y_pred`, see below
y_pred: If inp is 'indices', then this is a (N_samples, k) array of top class indices (pre-sorted class indices based on score) per sample
            If inp is 'scores', then this is assumed to be a (N_samples, C) array of class scores per sample, where C is the number of classes
y_true: (N_samples,) 1D numpy array of ground truth labels (integer indices)
"""
        print('How accuracy improves when the correct result may appear among the top 1, 2, ..., k predictions:\n')
if inp == 'scores':
predictions = np.argsort(y_pred, axis=1)[:, ::-1] # sort in descending order
else:
predictions = y_pred
predictions = predictions[:, :min(k, predictions.shape[1])] # take top k
accuracy_per_rank = acc_top_k(predictions, y_true)
        row1 = ['k'] + list(range(1, len(accuracy_per_rank) + 1))  # list() is required under Python 3
row2 = ['Accuracy'] + list(accuracy_per_rank)
print(tabulate([row1, row2], tablefmt='orgtbl'))
if self.plot:
from matplotlib.ticker import MaxNLocator
ax = plt.figure().gca()
plt.plot(np.arange(1, k + 1, dtype=int), accuracy_per_rank, '.-')
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
plt.xlabel('Number of allowed predictions (k)')
plt.ylabel('Cumulative accuracy\n(prob. of correct result being in top k pred.)')
plt.title('Cumulative Accuracy vs Number of allowed predictions')
plt.show(block=False)
return accuracy_per_rank
def generate_classification_report(self, digits=3, number_of_thieves=2, maxcharlength=35):
"""
Returns a string of a report for given metric arrays (array length equals the number of classes).
Called internally by `analyze_classification`.
digits: number of digits after . for displaying results
number_of_thieves: number of biggest thieves to report
maxcharlength: max. number of characters to use when displaying thief names
"""
        relative_freq = self.support / np.sum(self.support)  # relative frequencies of each class in the true labels
        sorted_class_indices = np.argsort(relative_freq)[::-1]  # sort by "importance" of classes (i.e. occurrence frequency)
last_line_heading = 'avg / total'
width = max(len(cn) for cn in self.existing_class_names)
width = max(width, len(last_line_heading), digits)
headers = ["precision", "recall", "f1-score", "rel. freq.", "abs. freq.", "biggest thieves"]
fmt = '%% %ds' % width # first column: class name
fmt += ' '
fmt += ' '.join(['% 10s' for _ in headers[:-1]])
fmt += '|\t % 5s'
fmt += '\n'
headers = [""] + headers
report = fmt % tuple(headers)
report += '\n'
for i in sorted_class_indices:
values = [self.existing_class_names[i]]
for v in (self.precision[i], self.recall[i], self.f1[i],
relative_freq[i]): # v is NOT a tuple, just goes through this list 1 el. at a time
values += ["{0:0.{1}f}".format(v, digits)]
values += ["{}".format(self.support[i])]
thieves = np.argsort(self.ConfMatrix_normalized_row[i, :])[::-1][
:number_of_thieves + 1] # other class indices "stealing" from class. May still contain self
thieves = thieves[thieves != i] # exclude self at this point
steal_ratio = self.ConfMatrix_normalized_row[i, thieves]
thieves_names = [
self.existing_class_names[thief][:min(maxcharlength, len(self.existing_class_names[thief]))] for thief
in thieves] # a little inefficient but inconsequential
string_about_stealing = ""
for j in range(len(thieves)):
string_about_stealing += "{0}: {1:.3f},\t".format(thieves_names[j], steal_ratio[j])
values += [string_about_stealing]
report += fmt % tuple(values)
report += '\n' + 100 * '-' + '\n'
# compute averages/sums
values = [last_line_heading]
for v in (np.average(self.precision, weights=relative_freq),
np.average(self.recall, weights=relative_freq),
np.average(self.f1, weights=relative_freq)):
values += ["{0:0.{1}f}".format(v, digits)]
values += ['{0}'.format(np.sum(relative_freq))]
values += ['{0}'.format(np.sum(self.support))]
values += ['']
# make last ("Total") line for report
report += fmt % tuple(values)
return report
def get_avg_prec_recall(self, ConfMatrix, existing_class_names, excluded_classes=None):
"""Get average recall and precision, using class frequencies as weights, optionally excluding
specified classes"""
class2ind = dict(zip(existing_class_names, range(len(existing_class_names))))
included_c = np.full(len(existing_class_names), 1, dtype=bool)
if not (excluded_classes is None):
excl_ind = [class2ind[excl_class] for excl_class in excluded_classes]
included_c[excl_ind] = False
pred_per_class = np.sum(ConfMatrix, axis=0)
nonzero_pred = (pred_per_class > 0)
included = included_c & nonzero_pred
support = np.sum(ConfMatrix, axis=1)
weights = support[included] / np.sum(support[included])
prec = np.diag(ConfMatrix[included, :][:, included]) / pred_per_class[included]
prec_avg = np.dot(weights, prec)
# rec = np.diag(ConfMatrix[included_c,:][:,included_c])/support[included_c]
rec_avg = np.trace(ConfMatrix[included_c, :][:, included_c]) / np.sum(support[included_c])
return prec_avg, rec_avg
def prec_rec_histogram(self, precision, recall, binedges=None):
"""Make a histogram with the distribution of classes with respect to precision and recall
"""
if binedges is None:
binedges = np.concatenate((np.arange(0, 0.6, 0.2), np.arange(0.6, 1.01, 0.1)), axis=0)
binedges = np.append(binedges, binedges[-1] + 0.1) # add 1 extra bin at the end for >= 1
hist_precision, binedges = np.histogram(precision, binedges)
hist_recall, binedges = np.histogram(recall, binedges)
print("\n\nDistribution of classes with respect to PRECISION: ")
for b in range(len(binedges) - 1):
print("[{:.1f}, {:.1f}): {}".format(binedges[b], binedges[b + 1], hist_precision[b]))
print("\n\nDistribution of classes with respect to RECALL: ")
for b in range(len(binedges) - 1):
print("[{:.1f}, {:.1f}): {}".format(binedges[b], binedges[b + 1], hist_recall[b]))
if self.plot:
plt.figure()
plt.subplot(121)
widths = np.diff(binedges)
plt.bar(binedges[:-1], hist_precision, width=widths, align='edge')
plt.xlim(0, 1)
ax = plt.gca()
ax.set_xticks(binedges)
plt.xlabel('Precision')
plt.ylabel('Number of classes')
plt.title("Distribution of classes with respect to precision")
plt.subplot(122)
widths = np.diff(binedges)
plt.bar(binedges[:-1], hist_recall, width=widths, align='edge')
plt.xlim(0, 1)
ax = plt.gca()
ax.set_xticks(binedges)
plt.xlabel('Recall')
plt.ylabel('Number of classes')
plt.title("Distribution of classes with respect to recall")
plt.show(block=False)
def analyze_classification(self, y_pred, y_true, class_names, excluded_classes=None):
"""
        For an array of label predictions and the respective true labels, computes and reports the confusion matrix, accuracy, precision, recall, etc.:
Input:
y_pred: 1D array of predicted labels (class indices)
y_true: 1D array of true labels (class indices)
class_names: 1D array or list of class names in the order of class indices.
Could also be integers [0, 1, ..., num_classes-1].
excluded_classes: list of classes to be excluded from average precision, recall calculation (e.g. OTHER)
"""
# Trim class_names to include only classes existing in y_pred OR y_true
in_pred_labels = set(list(y_pred))
in_true_labels = set(list(y_true))
self.existing_class_ind = sorted(list(in_pred_labels | in_true_labels))
class_strings = [str(name) for name in class_names] # needed in case `class_names` elements are not strings
self.existing_class_names = [class_strings[ind][:min(self.maxcharlength, len(class_strings[ind]))] for ind in
self.existing_class_ind] # a little inefficient but inconsequential
# Confusion matrix
ConfMatrix = metrics.confusion_matrix(y_true, y_pred)
'''
if self.print_conf_mat:
print_confusion_matrix(ConfMatrix, label_strings=self.existing_class_names, title='Confusion matrix')
print('\n')
if self.plot:
plt.figure()
plot_confusion_matrix(ConfMatrix, self.existing_class_names)
'''
# Normalize the confusion matrix by row (i.e by the number of samples in each class)
self.ConfMatrix_normalized_row = ConfMatrix.astype('float') / ConfMatrix.sum(axis=1)[:, np.newaxis]
'''
if self.print_conf_mat:
print_confusion_matrix(self.ConfMatrix_normalized_row, label_strings=self.existing_class_names,
title='Confusion matrix normalized by row')
print('\n')
if self.plot:
plt.figure()
plot_confusion_matrix(self.ConfMatrix_normalized_row, label_strings=self.existing_class_names,
title='Confusion matrix normalized by row')
plt.show(block=False)
'''
# Analyze results
self.total_accuracy = np.trace(ConfMatrix) / len(y_true)
print('Overall accuracy: {:.3f}\n'.format(self.total_accuracy))
# returns metrics for each class, in the same order as existing_class_names
self.precision, self.recall, self.f1, self.support = metrics.precision_recall_fscore_support(y_true, y_pred,
labels=self.existing_class_ind)
# Print report
#print(self.generate_classification_report())
# Calculate average precision and recall
self.prec_avg, self.rec_avg = self.get_avg_prec_recall(ConfMatrix, self.existing_class_names, excluded_classes)
if excluded_classes:
print(
"\nAverage PRECISION: {:.2f}\n(using class frequencies as weights, excluding classes with no predictions and predictions in '{}')".format(
self.prec_avg, ', '.join(excluded_classes)))
print(
"\nAverage RECALL (= ACCURACY): {:.2f}\n(using class frequencies as weights, excluding classes in '{}')".format(
self.rec_avg, ', '.join(excluded_classes)))
# Make a histogram with the distribution of classes with respect to precision and recall
self.prec_rec_histogram(self.precision, self.recall)
return {"total_accuracy": self.total_accuracy, "precision": self.precision, "recall": self.recall,
"f1": self.f1, "support": self.support, "prec_avg": self.prec_avg, "rec_avg": self.rec_avg}
================================================
FILE: ts_classification_methods/tst_cls/src/utils/utils.py
================================================
import json
import os
import sys
import builtins
import functools
import time
import ipdb
from copy import deepcopy
import numpy as np
import torch
import xlrd
import xlwt
from xlutils.copy import copy
import logging
logging.basicConfig(format='%(asctime)s | %(levelname)s : %(message)s', level=logging.INFO)
logger = logging.getLogger(__name__)
def timer(func):
"""Print the runtime of the decorated function"""
@functools.wraps(func)
def wrapper_timer(*args, **kwargs):
        start_time = time.perf_counter()
        value = func(*args, **kwargs)
        end_time = time.perf_counter()
        run_time = end_time - start_time
        print(f"Finished {func.__name__!r} in {run_time:.4f} secs")
return value
return wrapper_timer
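# Illustrative usage sketch for the timer decorator (hypothetical helper; not part
# of the original module).
def _demo_timer():
    @timer
    def slow_add(a, b):
        time.sleep(0.1)
        return a + b
    return slow_add(1, 2)  # prints something like: Finished 'slow_add' in 0.1002 secs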
def save_model(path, epoch, model, optimizer=None):
if isinstance(model, torch.nn.DataParallel):
state_dict = model.module.state_dict()
else:
state_dict = model.state_dict()
data = {'epoch': epoch,
'state_dict': state_dict}
if not (optimizer is None):
data['optimizer'] = optimizer.state_dict()
torch.save(data, path)
def load_model(model, model_path, optimizer=None, resume=False, change_output=False,
lr=None, lr_step=None, lr_factor=None):
start_epoch = 0
checkpoint = torch.load(model_path, map_location=lambda storage, loc: storage)
state_dict = deepcopy(checkpoint['state_dict'])
if change_output:
for key, val in checkpoint['state_dict'].items():
if key.startswith('output_layer'):
state_dict.pop(key)
model.load_state_dict(state_dict, strict=False)
print('Loaded model from {}. Epoch: {}'.format(model_path, checkpoint['epoch']))
# resume optimizer parameters
if optimizer is not None and resume:
if 'optimizer' in checkpoint:
optimizer.load_state_dict(checkpoint['optimizer'])
start_epoch = checkpoint['epoch']
start_lr = lr
for i in range(len(lr_step)):
if start_epoch >= lr_step[i]:
start_lr *= lr_factor[i]
for param_group in optimizer.param_groups:
param_group['lr'] = start_lr
print('Resumed optimizer with start lr', start_lr)
else:
print('No optimizer parameters in checkpoint.')
if optimizer is not None:
return model, optimizer, start_epoch
else:
return model
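# Illustrative save/load round-trip sketch (hypothetical helper and path; not part
# of the original module): save a checkpoint and reload it into a fresh model of
# the same architecture.
def _demo_save_load(path='demo_checkpoint.pth'):
    model = torch.nn.Linear(4, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    save_model(path, epoch=3, model=model, optimizer=optimizer)
    fresh = torch.nn.Linear(4, 2)
    fresh = load_model(fresh, path)  # prints the checkpoint epoch
    return fresh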
def load_config(config_filepath):
"""
Using a json file with the master configuration (config file for each part of the pipeline),
return a dictionary containing the entire configuration settings in a hierarchical fashion.
"""
with open(config_filepath) as cnfg:
config = json.load(cnfg)
return config
def create_dirs(dirs):
"""
Input:
dirs: a list of directories to create, in case these directories are not found
Returns:
exit_code: 0 if success, -1 if failure
"""
try:
for dir_ in dirs:
if not os.path.exists(dir_):
os.makedirs(dir_)
return 0
    except Exception as err:
        print("Creating directories error: {0}".format(err))
        return -1  # return the documented exit code instead of terminating the process
def export_performance_metrics(filepath, metrics_table, header, book=None, sheet_name="metrics"):
"""Exports performance metrics on the validation set for all epochs to an excel file"""
if book is None:
book = xlwt.Workbook() # new excel work book
book = write_table_to_sheet([header] + metrics_table, book, sheet_name=sheet_name)
book.save(filepath)
logger.info("Exported per epoch performance metrics in '{}'".format(filepath))
return book
def write_row(sheet, row_ind, data_list):
"""Write a list to row_ind row of an excel sheet"""
row = sheet.row(row_ind)
for col_ind, col_value in enumerate(data_list):
row.write(col_ind, col_value)
return
def write_table_to_sheet(table, work_book, sheet_name=None):
"""Writes a table implemented as a list of lists to an excel sheet in the given work book object"""
sheet = work_book.add_sheet(sheet_name)
for row_ind, row_list in enumerate(table):
write_row(sheet, row_ind, row_list)
return work_book
def export_record(filepath, values):
"""Adds a list of values as a bottom row of a table in a given excel file"""
read_book = xlrd.open_workbook(filepath, formatting_info=True)
read_sheet = read_book.sheet_by_index(0)
last_row = read_sheet.nrows
work_book = copy(read_book)
sheet = work_book.get_sheet(0)
write_row(sheet, last_row, values)
work_book.save(filepath)
def register_record(filepath, timestamp, experiment_name, best_metrics, final_metrics=None, comment=''):
"""
Adds the best and final metrics of a given experiment as a record in an excel sheet with other experiment records.
Creates excel sheet if it doesn't exist.
Args:
filepath: path of excel file keeping records
timestamp: string
experiment_name: string
best_metrics: dict of metrics at best epoch {metric_name: metric_value}. Includes "epoch" as first key
final_metrics: dict of metrics at final epoch {metric_name: metric_value}. Includes "epoch" as first key
comment: optional description
"""
metrics_names, metrics_values = zip(*best_metrics.items())
row_values = [timestamp, experiment_name, comment] + list(metrics_values)
if final_metrics is not None:
final_metrics_names, final_metrics_values = zip(*final_metrics.items())
row_values += list(final_metrics_values)
if not os.path.exists(filepath): # Create a records file for the first time
logger.warning("Records file '{}' does not exist! Creating new file ...".format(filepath))
directory = os.path.dirname(filepath)
if len(directory) and not os.path.exists(directory):
os.makedirs(directory)
header = ["Timestamp", "Name", "Comment"] + ["Best " + m for m in metrics_names]
if final_metrics is not None:
header += ["Final " + m for m in final_metrics_names]
book = xlwt.Workbook() # excel work book
book = write_table_to_sheet([header, row_values], book, sheet_name="records")
book.save(filepath)
else:
try:
export_record(filepath, row_values)
except Exception as x:
alt_path = os.path.join(os.path.dirname(filepath), "record_" + experiment_name)
logger.error("Failed saving in: '{}'! Will save here instead: {}".format(filepath, alt_path))
export_record(alt_path, row_values)
filepath = alt_path
logger.info("Exported performance record to '{}'".format(filepath))
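# Illustrative usage sketch for register_record (hypothetical metric values and
# file name; not part of the original module): the .xls records file is created
# on first use and appended to afterwards.
def _demo_register_record(path='demo_records.xls'):
    best = {'epoch': 12, 'accuracy': 0.91, 'loss': 0.31}
    register_record(path, timestamp='2023-01-01 00:00:00', experiment_name='demo_run',
                    best_metrics=best, comment='sanity check')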
class Printer(object):
"""Class for printing output by refreshing the same line in the console, e.g. for indicating progress of a process"""
def __init__(self, console=True):
if console:
self.print = self.dyn_print
else:
self.print = builtins.print
@staticmethod
def dyn_print(data):
"""Print things to stdout on one line, refreshing it dynamically"""
sys.stdout.write("\r\x1b[K" + data.__str__())
sys.stdout.flush()
def readable_time(time_difference):
"""Convert a float measuring time difference in seconds into a tuple of (hours, minutes, seconds)"""
hours = time_difference // 3600
minutes = (time_difference // 60) % 60
seconds = time_difference % 60
return hours, minutes, seconds
# def check_model1(model, verbose=False, stop_on_error=False):
# status_ok = True
# for name, param in model.named_parameters():
# nan_grads = torch.isnan(param.grad)
# nan_params = torch.isnan(param)
# if nan_grads.any() or nan_params.any():
# status_ok = False
# print("Param {}: {}/{} nan".format(name, torch.sum(nan_params), param.numel()))
# if verbose:
# print(param)
# print("Grad {}: {}/{} nan".format(name, torch.sum(nan_grads), param.grad.numel()))
# if verbose:
# print(param.grad)
# if stop_on_error:
# ipdb.set_trace()
# if status_ok:
# print("Model Check: OK")
# else:
# print("Model Check: PROBLEM")
def check_model(model, verbose=False, zero_thresh=1e-8, inf_thresh=1e6, stop_on_error=False):
status_ok = True
for name, param in model.named_parameters():
param_ok = check_tensor(param, verbose=verbose, zero_thresh=zero_thresh, inf_thresh=inf_thresh)
if not param_ok:
status_ok = False
print("Parameter '{}' PROBLEM".format(name))
grad_ok = True
if param.grad is not None:
grad_ok = check_tensor(param.grad, verbose=verbose, zero_thresh=zero_thresh, inf_thresh=inf_thresh)
if not grad_ok:
status_ok = False
print("Gradient of parameter '{}' PROBLEM".format(name))
if stop_on_error and not (param_ok and grad_ok):
ipdb.set_trace()
if status_ok:
print("Model Check: OK")
else:
print("Model Check: PROBLEM")
def check_tensor(X, verbose=True, zero_thresh=1e-8, inf_thresh=1e6):
is_nan = torch.isnan(X)
if is_nan.any():
print("{}/{} nan".format(torch.sum(is_nan), X.numel()))
return False
num_small = torch.sum(torch.abs(X) < zero_thresh)
num_large = torch.sum(torch.abs(X) > inf_thresh)
if verbose:
print("Shape: {}, {} elements".format(X.shape, X.numel()))
print("No 'nan' values")
print("Min: {}".format(torch.min(X)))
print("Median: {}".format(torch.median(X)))
print("Max: {}".format(torch.max(X)))
print("Histogram of values:")
values = X.view(-1).detach().numpy()
hist, binedges = np.histogram(values, bins=20)
for b in range(len(binedges) - 1):
print("[{}, {}): {}".format(binedges[b], binedges[b + 1], hist[b]))
print("{}/{} abs. values < {}".format(num_small, X.numel(), zero_thresh))
print("{}/{} abs. values > {}".format(num_large, X.numel(), inf_thresh))
if num_large:
print("{}/{} abs. values > {}".format(num_large, X.numel(), inf_thresh))
return False
return True
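# Illustrative usage sketch for check_tensor (hypothetical helper; not part of the
# original module): a well-behaved tensor passes, a NaN-containing tensor fails.
def _demo_check_tensor():
    ok = check_tensor(torch.randn(8, 8), verbose=False)              # True
    bad = check_tensor(torch.tensor([float('nan')]), verbose=False)  # prints "1/1 nan", returns False
    return ok, bad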
def count_parameters(model, trainable=False):
if trainable:
return sum(p.numel() for p in model.parameters() if p.requires_grad)
else:
return sum(p.numel() for p in model.parameters())
def recursively_hook(model, hook_fn):
    """Register `hook_fn` as a forward hook on every leaf module of `model`."""
    for name, module in model.named_children():
        if len(list(module.children())) > 0:  # not a leaf node: recurse into the child itself,
            recursively_hook(module, hook_fn)  # so that its own leaf submodules get hooked
        else:
            module.register_forward_hook(hook_fn)
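# Illustrative usage sketch for recursively_hook (hypothetical hook; not part of the
# original module): print the output shape of every leaf module during a forward pass.
def _demo_recursively_hook():
    def shape_hook(module, inputs, output):
        print(type(module).__name__, tuple(output.shape))
    net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
    recursively_hook(net, shape_hook)
    net(torch.randn(1, 4))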
def compute_loss(net: torch.nn.Module,
dataloader: torch.utils.data.DataLoader,
loss_function: torch.nn.Module,
device: torch.device = 'cpu') -> torch.Tensor:
"""Compute the loss of a network on a given dataset.
Does not compute gradient.
Parameters
----------
net:
Network to evaluate.
dataloader:
Iterator on the dataset.
loss_function:
Loss function to compute.
device:
Torch device, or :py:class:`str`.
Returns
-------
Loss as a tensor with no grad.
"""
running_loss = 0
with torch.no_grad():
for x, y in dataloader:
netout = net(x.to(device)).cpu()
running_loss += loss_function(y, netout)
return running_loss / len(dataloader)
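# Illustrative usage sketch for compute_loss (hypothetical setup; not part of the
# original module): average MSE of an untrained linear net over a tiny random dataset.
def _demo_compute_loss():
    net = torch.nn.Linear(4, 1)
    data = torch.utils.data.TensorDataset(torch.randn(32, 4), torch.randn(32, 1))
    loader = torch.utils.data.DataLoader(data, batch_size=8)
    return compute_loss(net, loader, torch.nn.MSELoss())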
================================================
FILE: ts_classification_methods/tstcc_cls/__init__.py
================================================
================================================
FILE: ts_classification_methods/tstcc_cls/config_files/ucr_Configs.py
================================================
class Config(object):
def __init__(self):
# model configs
self.input_channels = 1
self.kernel_size = 8
self.stride = 1
self.final_out_channels = 128
self.num_classes = 3
self.dropout = 0.35
self.features_len = 18
# training configs
self.num_epoch = 600
# optimizer parameters
self.beta1 = 0.9
self.beta2 = 0.99
self.lr = 3e-4
# data parameters
self.drop_last = True
self.batch_size = 128
self.Context_Cont = Context_Cont_configs()
self.TC = TC()
self.augmentation = augmentations()
class augmentations(object):
def __init__(self):
self.jitter_scale_ratio = 1.1
self.jitter_ratio = 0.8
self.max_seg = 8
class Context_Cont_configs(object):
def __init__(self):
self.temperature = 0.2
self.use_cosine_similarity = True
class TC(object):
def __init__(self):
self.hidden_dim = 100
self.timesteps = 6
================================================
FILE: ts_classification_methods/tstcc_cls/config_files/uea_Configs.py
================================================
class Config(object):
def __init__(self):
# model configs
self.input_channels = 9
self.kernel_size = 8
self.stride = 1
self.final_out_channels = 128
self.num_classes = 6
self.dropout = 0.35
self.features_len = 18
# training configs
self.num_epoch = 600
# optimizer parameters
self.beta1 = 0.9
self.beta2 = 0.99
self.lr = 3e-4
# data parameters
self.drop_last = True
self.batch_size = 128
self.Context_Cont = Context_Cont_configs()
self.TC = TC()
self.augmentation = augmentations()
class augmentations(object):
def __init__(self):
self.jitter_scale_ratio = 1.1
self.jitter_ratio = 0.8
self.max_seg = 8
class Context_Cont_configs(object):
def __init__(self):
self.temperature = 0.2
self.use_cosine_similarity = True
class TC(object):
def __init__(self):
self.hidden_dim = 100
self.timesteps = 6
================================================
FILE: ts_classification_methods/tstcc_cls/dataloader/augmentations.py
================================================
import numpy as np
import torch
def DataTransform(sample, config):
weak_aug = scaling(sample, config.augmentation.jitter_scale_ratio)
strong_aug = jitter(permutation(sample, max_segments=config.augmentation.max_seg), config.augmentation.jitter_ratio)
return weak_aug, strong_aug
def jitter(x, sigma=0.8):
# https://arxiv.org/pdf/1706.00527.pdf
return x + np.random.normal(loc=0., scale=sigma, size=x.shape)
def scaling(x, sigma=1.1):
# https://arxiv.org/pdf/1706.00527.pdf
factor = np.random.normal(loc=2., scale=sigma, size=(x.shape[0], x.shape[2]))
ai = []
for i in range(x.shape[1]):
xi = x[:, i, :]
ai.append(np.multiply(xi, factor[:, :])[:, np.newaxis, :])
return np.concatenate((ai), axis=1)
def permutation(x, max_segments=5, seg_mode="random"):
orig_steps = np.arange(x.shape[2])
num_segs = np.random.randint(1, max_segments, size=(x.shape[0]))
ret = np.zeros_like(x)
for i, pat in enumerate(x):
if num_segs[i] > 1:
if seg_mode == "random":
split_points = np.random.choice(x.shape[2] - 2, num_segs[i] - 1, replace=False)
split_points.sort()
splits = np.split(orig_steps, split_points)
else:
splits = np.array_split(orig_steps, num_segs[i])
warp = np.concatenate(np.random.permutation(splits)).ravel()
ret[i] = pat[0,warp]
else:
ret[i] = pat
return torch.from_numpy(ret)
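# Illustrative sketch (hypothetical config and shapes; not part of the original
# module): DataTransform builds the weak view via scaling and the strong view via
# jitter(permutation(...)), per the TS-TCC recipe above.
if __name__ == "__main__":
    import types
    cfg = types.SimpleNamespace(augmentation=types.SimpleNamespace(
        jitter_scale_ratio=1.1, jitter_ratio=0.8, max_seg=8))
    x = np.random.randn(4, 1, 128)  # (batch, channels, seq_len)
    weak, strong = DataTransform(x, cfg)
    print(weak.shape, strong.shape)  # both (4, 1, 128); strong is a torch tensor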
================================================
FILE: ts_classification_methods/tstcc_cls/dataloader/dataloader.py
================================================
import os
import numpy as np
import torch
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
from .augmentations import DataTransform
class Load_Dataset(Dataset):
# Initialize your data, download, etc.
def __init__(self, dataset, config, training_mode):
super(Load_Dataset, self).__init__()
self.training_mode = training_mode
X_train = dataset["samples"]
y_train = dataset["labels"]
if len(X_train.shape) < 3:
X_train = X_train.unsqueeze(2)
        if X_train.shape.index(min(X_train.shape)) != 1:  # make sure channels are in the second dim
            X_train = X_train.permute(0, 2, 1)
        elif X_train.shape.index(min(X_train.shape)) == 1 and config.input_channels > 1:
            # for some multivariate UEA datasets, swap again so channels end up in the second dim
            X_train = X_train.permute(0, 2, 1)
if isinstance(X_train, np.ndarray):
self.x_data = torch.from_numpy(X_train)
self.y_data = torch.from_numpy(y_train).long()
else:
self.x_data = X_train
self.y_data = y_train
self.len = X_train.shape[0]
if training_mode == "self_supervised": # no need to apply Augmentations in other modes
self.aug1, self.aug2 = DataTransform(self.x_data, config)
def __getitem__(self, index):
if self.training_mode == "self_supervised":
return self.x_data[index], self.y_data[index], self.aug1[index], self.aug2[index]
else:
return self.x_data[index], self.y_data[index], self.x_data[index], self.x_data[index]
def __len__(self):
return self.len
def data_generator(data_path, configs, training_mode):
train_dataset = torch.load(os.path.join(data_path, "train.pt"))
valid_dataset = torch.load(os.path.join(data_path, "val.pt"))
test_dataset = torch.load(os.path.join(data_path, "test.pt"))
print(type(train_dataset["samples"]))
print("Data shape = ", train_dataset["samples"].shape, valid_dataset["samples"].shape,
test_dataset["samples"].shape)
train_dataset = Load_Dataset(train_dataset, configs, training_mode)
valid_dataset = Load_Dataset(valid_dataset, configs, training_mode)
test_dataset = Load_Dataset(test_dataset, configs, training_mode)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=configs.batch_size,
shuffle=True, drop_last=configs.drop_last,
num_workers=0)
valid_loader = torch.utils.data.DataLoader(dataset=valid_dataset, batch_size=configs.batch_size,
shuffle=False, drop_last=configs.drop_last,
num_workers=0)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=configs.batch_size,
shuffle=False, drop_last=False,
num_workers=0)
return train_loader, valid_loader, test_loader
================================================
FILE: ts_classification_methods/tstcc_cls/main.py
================================================
import torch
import os
import numpy as np
from datetime import datetime
import argparse
from tstcc_cls.utils import _logger, set_requires_grad
from tstcc_cls.dataloader.dataloader import data_generator
from tstcc_cls.trainer.trainer import Trainer, model_evaluate
from tstcc_cls.models.TC import TC
from tstcc_cls.utils import _calc_metrics, copy_Files
from tstcc_cls.models.model import base_Model
# Args selections
start_time = datetime.now()
parser = argparse.ArgumentParser()
######################## Model parameters ########################
home_dir = os.getcwd()
parser.add_argument('--experiment_description', default='Exp1', type=str,
help='Experiment Description')
parser.add_argument('--run_description', default='run1', type=str,
help='Experiment Description')
parser.add_argument('--seed', default=0, type=int,
help='seed value')
parser.add_argument('--training_mode', default='supervised', type=str,
help='Modes of choice: random_init, supervised, self_supervised, fine_tune, train_linear')
parser.add_argument('--selected_dataset', default='Epilepsy', type=str,
help='Dataset of choice: sleepEDF, HAR, Epilepsy, pFD')
parser.add_argument('--logs_save_dir', default='experiments_logs', type=str,
help='saving directory')
parser.add_argument('--device', default='cuda', type=str,
help='cpu or cuda')
parser.add_argument('--home_path', default=home_dir, type=str,
help='Project home directory')
args = parser.parse_args()
device = torch.device(args.device)
experiment_description = args.experiment_description
data_type = args.selected_dataset
method = 'TS-TCC'
training_mode = args.training_mode
run_description = args.run_description
logs_save_dir = args.logs_save_dir
os.makedirs(logs_save_dir, exist_ok=True)
exec(f'from config_files.{data_type}_Configs import Config as Configs')
configs = Configs()
# ##### fix random seeds for reproducibility ########
SEED = args.seed
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = False
np.random.seed(SEED)
#####################################################
experiment_log_dir = os.path.join(logs_save_dir, experiment_description, run_description, training_mode + f"_seed_{SEED}")
os.makedirs(experiment_log_dir, exist_ok=True)
# loop through domains
counter = 0
src_counter = 0
# Logging
log_file_name = os.path.join(experiment_log_dir, f"logs_{datetime.now().strftime('%d_%m_%Y_%H_%M_%S')}.log")
logger = _logger(log_file_name)
logger.debug("=" * 45)
logger.debug(f'Dataset: {data_type}')
logger.debug(f'Method: {method}')
logger.debug(f'Mode: {training_mode}')
logger.debug("=" * 45)
# Load datasets
data_path = f"./data/{data_type}"
train_dl, valid_dl, test_dl = data_generator(data_path, configs, training_mode)
logger.debug("Data loaded ...")
# Load Model
model = base_Model(configs).to(device)
temporal_contr_model = TC(configs, device).to(device)
if training_mode == "fine_tune":
# load saved model of this experiment
load_from = os.path.join(os.path.join(logs_save_dir, experiment_description, run_description, f"self_supervised_seed_{SEED}", "saved_models"))
chkpoint = torch.load(os.path.join(load_from, "ckp_last.pt"), map_location=device)
pretrained_dict = chkpoint["model_state_dict"]
model_dict = model.state_dict()
del_list = ['logits']
pretrained_dict_copy = pretrained_dict.copy()
for i in pretrained_dict_copy.keys():
for j in del_list:
if j in i:
del pretrained_dict[i]
model_dict.update(pretrained_dict)
model.load_state_dict(model_dict)
if training_mode == "train_linear" or "tl" in training_mode:
load_from = os.path.join(os.path.join(logs_save_dir, experiment_description, run_description, f"self_supervised_seed_{SEED}", "saved_models"))
chkpoint = torch.load(os.path.join(load_from, "ckp_last.pt"), map_location=device)
pretrained_dict = chkpoint["model_state_dict"]
model_dict = model.state_dict()
# 1. filter out unnecessary keys
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
# delete these parameters (Ex: the linear layer at the end)
del_list = ['logits']
pretrained_dict_copy = pretrained_dict.copy()
for i in pretrained_dict_copy.keys():
for j in del_list:
if j in i:
del pretrained_dict[i]
model_dict.update(pretrained_dict)
model.load_state_dict(model_dict)
set_requires_grad(model, pretrained_dict, requires_grad=False) # Freeze everything except last layer.
if training_mode == "random_init":
model_dict = model.state_dict()
# delete all the parameters except for logits
del_list = ['logits']
pretrained_dict_copy = model_dict.copy()
for i in pretrained_dict_copy.keys():
for j in del_list:
if j in i:
del model_dict[i]
set_requires_grad(model, model_dict, requires_grad=False) # Freeze everything except last layer.
model_optimizer = torch.optim.Adam(model.parameters(), lr=configs.lr, betas=(configs.beta1, configs.beta2), weight_decay=3e-4)
temporal_contr_optimizer = torch.optim.Adam(temporal_contr_model.parameters(), lr=configs.lr, betas=(configs.beta1, configs.beta2), weight_decay=3e-4)
if training_mode == "self_supervised": # to do it only once
copy_Files(os.path.join(logs_save_dir, experiment_description, run_description), data_type)
# Trainer
Trainer(model, temporal_contr_model, model_optimizer, temporal_contr_optimizer, train_dl, valid_dl, test_dl, device, logger, configs, experiment_log_dir, training_mode)
if training_mode != "self_supervised":
# Testing
outs = model_evaluate(model, temporal_contr_model, test_dl, device, training_mode)
total_loss, total_acc, pred_labels, true_labels = outs
_calc_metrics(pred_labels, true_labels, experiment_log_dir, args.home_path)
logger.debug(f"Training time is : {datetime.now()-start_time}")
================================================
FILE: ts_classification_methods/tstcc_cls/main_ucr.py
================================================
import argparse
import os
import sys
import time
from datetime import datetime
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import numpy as np
import torch
from data.preprocessing import load_data, k_fold, normalize_per_series, fill_nan_value, transfer_labels
from tsm_utils import save_cls_result, set_seed
from tstcc_cls.models.TC import TC
from tstcc_cls.models.model import base_Model
from tstcc_cls.trainer.trainer import Trainer_cls
from tstcc_cls.utils import _logger, generator_ucr, generator_ucr_config
# Args selections
start_time = datetime.now()
parser = argparse.ArgumentParser()
######################## Model parameters ########################
home_dir = os.getcwd()
parser.add_argument('--experiment_description', default='Exp1', type=str,
help='Experiment Description')
parser.add_argument('--run_description', default='run1', type=str,
help='Experiment Description')
parser.add_argument('--seed', default=42, type=int,
help='seed value')
parser.add_argument('--random_seed', type=int, default=42, help='The random seed')
parser.add_argument('--training_mode', default='self_supervised', type=str,
help='Modes of choice: random_init, supervised, self_supervised, fine_tune, train_linear')
parser.add_argument('--selected_dataset', default='ucr', type=str,
help='Dataset of choice: sleepEDF, HAR, Epilepsy, pFD') ## HAR
parser.add_argument('--dataset', default='CBF', type=str,
help='Dataset of choice: sleepEDF, HAR, Epilepsy, pFD')
parser.add_argument('--logs_save_dir', default='experiments_logs', type=str,
help='saving directory')
parser.add_argument('--device', default='cuda:0', type=str,
help='cpu or cuda')
parser.add_argument('--home_path', default=home_dir, type=str,
help='Project home directory')
parser.add_argument('--save_csv_name', type=str, default='test_tstcc_ucr_0424_')
parser.add_argument('--save_dir', type=str, default='/SSD/lz/time_tsm/tstcc_cls/result')
args = parser.parse_args()
set_seed(args)
device = torch.device(args.device)
experiment_description = args.experiment_description
data_type = args.selected_dataset
method = 'TS-TCC'
training_mode = args.training_mode
run_description = args.run_description
logs_save_dir = args.logs_save_dir
os.makedirs(logs_save_dir, exist_ok=True)
exec(f'from config_files.{data_type}_Configs import Config as Configs')
configs = Configs()
# ##### fix random seeds for reproducibility ########
SEED = args.seed
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = False
np.random.seed(SEED)
#####################################################
experiment_log_dir = os.path.join(logs_save_dir, experiment_description, run_description,
training_mode + f"_seed_{SEED}")
os.makedirs(experiment_log_dir, exist_ok=True)
# loop through domains
counter = 0
src_counter = 0
# Logging
log_file_name = os.path.join(experiment_log_dir, f"logs_{datetime.now().strftime('%d_%m_%Y_%H_%M_%S')}.log")
logger = _logger(log_file_name)
logger.debug("=" * 45)
logger.debug(f'Dataset: {data_type}')
logger.debug(f'Method: {method}')
logger.debug(f'Mode: {training_mode}')
logger.debug("=" * 45)
# Load datasets
data_path = f"./data/{data_type}"
sum_dataset, sum_target, num_classes = load_data(
dataroot='/SSD/lz/UCRArchive_2018',
dataset=args.dataset)
sum_target = transfer_labels(sum_target)
# sum_dataset = sum_dataset[..., np.newaxis]
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = k_fold(
sum_dataset, sum_target)
# print("Start features_len = ", configs.features_len, ", num_classes = ", configs.num_classes)
generator_ucr_config(data=train_datasets[0], label=train_targets[0], configs=configs)
if train_datasets[0].shape[1] < 30:
configs.TC.timesteps = 1
# print("End features_len = ", configs.features_len, ", num_classes = ", configs.num_classes)
# train_dl, valid_dl, test_dl = data_generator(data_path, configs, training_mode)
train_accuracies = []
val_accuracies = []
test_accuracies = []
t = time.time()
for i in range(5):
### mean impute
train_data, val_data, test_data = fill_nan_value(train_datasets[i], val_datasets[i], test_datasets[i])
### normalize
train_data = normalize_per_series(train_data)
val_data = normalize_per_series(val_data)
test_data = normalize_per_series(test_data)
train_data = train_data[..., np.newaxis]
val_data = val_data[..., np.newaxis]
test_data = test_data[..., np.newaxis]
train_dl = generator_ucr(data=train_data, label=train_targets[i],
configs=configs, training_mode='self_supervised', drop_last=True)
valid_dl = generator_ucr(data=val_data, label=val_targets[i],
configs=configs, training_mode='self_supervised', drop_last=False)
test_dl = generator_ucr(data=test_data, label=test_targets[i],
configs=configs, training_mode='self_supervised', drop_last=False)
logger.debug("Data loaded ...")
# Load Model
model = base_Model(configs).to(device)
temporal_contr_model = TC(configs, device).to(device)
model_optimizer = torch.optim.Adam(model.parameters(), lr=configs.lr, betas=(configs.beta1, configs.beta2),
weight_decay=3e-4)
temporal_contr_optimizer = torch.optim.Adam(temporal_contr_model.parameters(), lr=configs.lr,
betas=(configs.beta1, configs.beta2), weight_decay=3e-4)
# copy_Files(os.path.join(logs_save_dir, experiment_description, run_description), data_type) # to do it only once
# self_supervised Trainer
Trainer_cls(model, temporal_contr_model, model_optimizer, temporal_contr_optimizer, train_dl, valid_dl,
test_dl, device, logger, configs, experiment_log_dir, training_mode='self_supervised')
print("Self_supervised end, start fine_tune!")
# fine_tune Trainer
train_dl = generator_ucr(data=train_data, label=train_targets[i],
configs=configs, training_mode='fine_tune', drop_last=True)
valid_dl = generator_ucr(data=val_data, label=val_targets[i],
configs=configs, training_mode='fine_tune', drop_last=False)
test_dl = generator_ucr(data=test_data, label=test_targets[i],
configs=configs, training_mode='fine_tune', drop_last=False)
train_acc, val_acc, test_acc = Trainer_cls(model, temporal_contr_model, model_optimizer, temporal_contr_optimizer,
train_dl, valid_dl,
test_dl, device, logger, configs, experiment_log_dir,
training_mode='fine_tune')
# print(type(train_acc.data), train_acc.numpy(), val_acc.numpy(), test_acc.numpy())
# train_accuracies = torch.Tensor(train_accuracies)
# test_accuracies = torch.Tensor(test_accuracies)
train_accuracies.append(train_acc.item())
val_accuracies.append(val_acc.item())
test_accuracies.append(test_acc.item())
train_time = time.time() - t
print("train_accuracies = ", train_accuracies, len(train_accuracies))
test_accuracies = torch.Tensor(test_accuracies)
save_cls_result(args, test_accu=torch.mean(test_accuracies), test_std=torch.std(test_accuracies),
train_time=train_time / 5, end_val_epoch=0.0, seeds=args.seed)
logger.debug(f"Training time is : {datetime.now() - start_time}")
================================================
FILE: ts_classification_methods/tstcc_cls/main_uea.py
================================================
import argparse
import os
import sys
import time
from datetime import datetime
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
import numpy as np
import torch
from data.preprocessing import k_fold, load_UEA, fill_nan_value, normalize_uea_set
from tsm_utils import save_cls_result, set_seed
from tstcc_cls.models.TC import TC
from tstcc_cls.models.model import base_Model
from tstcc_cls.trainer.trainer import Trainer_cls
from tstcc_cls.utils import _logger, generator_uea_config, generator_uea
# Args selections
start_time = datetime.now()
parser = argparse.ArgumentParser()
######################## Model parameters ########################
home_dir = os.getcwd()
parser.add_argument('--experiment_description', default='Exp1', type=str,
help='Experiment Description')
parser.add_argument('--run_description', default='run1', type=str,
help='Experiment Description')
parser.add_argument('--seed', default=42, type=int,
help='seed value')
parser.add_argument('--random_seed', type=int, default=42, help='The random seed')
parser.add_argument('--training_mode', default='self_supervised', type=str,
help='Modes of choice: random_init, supervised, self_supervised, fine_tune, train_linear')
parser.add_argument('--selected_dataset', default='uea', type=str,
help='Dataset of choice: sleepEDF, HAR, Epilepsy, pFD') ## HAR
parser.add_argument('--dataset', default='EigenWorms', type=str,
help='Dataset of choice: sleepEDF, HAR, Epilepsy, pFD')
parser.add_argument('--logs_save_dir', default='experiments_logs', type=str,
help='saving directory')
parser.add_argument('--device', default='cuda:1', type=str,
help='cpu or cuda')
parser.add_argument('--home_path', default=home_dir, type=str,
help='Project home directory')
parser.add_argument('--save_csv_name', type=str, default='test_tstcc_uea_0425_')
parser.add_argument('--save_dir', type=str, default='/SSD/lz/time_tsm/tstcc_cls/result')
args = parser.parse_args()
set_seed(args)
device = torch.device(args.device)
experiment_description = args.experiment_description
data_type = args.selected_dataset
method = 'TS-TCC'
training_mode = args.training_mode
run_description = args.run_description
logs_save_dir = args.logs_save_dir
os.makedirs(logs_save_dir, exist_ok=True)
exec(f'from config_files.{data_type}_Configs import Config as Configs')
configs = Configs()
# ##### fix random seeds for reproducibility ########
SEED = args.seed
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = False
np.random.seed(SEED)
#####################################################
experiment_log_dir = os.path.join(logs_save_dir, experiment_description, run_description,
training_mode + f"_seed_{SEED}")
os.makedirs(experiment_log_dir, exist_ok=True)
# loop through domains
counter = 0
src_counter = 0
# Logging
log_file_name = os.path.join(experiment_log_dir, f"logs_{datetime.now().strftime('%d_%m_%Y_%H_%M_%S')}.log")
logger = _logger(log_file_name)
logger.debug("=" * 45)
logger.debug(f'Dataset: {data_type}')
logger.debug(f'Method: {method}')
logger.debug(f'Mode: {training_mode}')
logger.debug("=" * 45)
# Load datasets
data_path = f"./data/{data_type}"
sum_dataset, sum_target, num_classes = load_UEA(
dataroot='/SSD/lz/Multivariate2018_arff',
dataset=args.dataset)
# sum_dataset = sum_dataset[..., np.newaxis]
train_datasets, train_targets, val_datasets, val_targets, test_datasets, test_targets = k_fold(
sum_dataset, sum_target)
# print("Start features_len = ", configs.features_len, ", num_classes = ", configs.num_classes)
generator_uea_config(data=train_datasets[0], label=train_targets[0], configs=configs)
if args.dataset == 'EigenWorms':
configs.augmentation.max_seg = 5
configs.batch_size = 8
if train_datasets[0].shape[1] <= 30:
configs.TC.timesteps = 1
# print("End features_len = ", configs.features_len, ", num_classes = ", configs.num_classes, ", input_channels = ",
# configs.input_channels)
# train_dl, valid_dl, test_dl = data_generator(data_path, configs, training_mode)
train_accuracies = []
val_accuracies = []
test_accuracies = []
t = time.time()
for i in range(5):
### mean impute
train_data, val_data, test_data = fill_nan_value(train_datasets[i], val_datasets[i], test_datasets[i])
### normalize
train_data = normalize_uea_set(train_data)
val_data = normalize_uea_set(val_data)
test_data = normalize_uea_set(test_data)
# train_data = train_data[..., np.newaxis]
# val_data = val_data[..., np.newaxis]
# test_data = test_data[..., np.newaxis]
train_dl = generator_uea(data=train_data, label=train_targets[i],
configs=configs, training_mode='self_supervised', drop_last=True)
valid_dl = generator_uea(data=val_data, label=val_targets[i],
configs=configs, training_mode='self_supervised', drop_last=False)
test_dl = generator_uea(data=test_data, label=test_targets[i],
configs=configs, training_mode='self_supervised', drop_last=False)
logger.debug("Data loaded ...")
# Load Model
model = base_Model(configs).to(device)
temporal_contr_model = TC(configs, device).to(device)
model_optimizer = torch.optim.Adam(model.parameters(), lr=configs.lr, betas=(configs.beta1, configs.beta2),
weight_decay=3e-4)
temporal_contr_optimizer = torch.optim.Adam(temporal_contr_model.parameters(), lr=configs.lr,
betas=(configs.beta1, configs.beta2), weight_decay=3e-4)
# copy_Files(os.path.join(logs_save_dir, experiment_description, run_description), data_type) # to do it only once
# self_supervised Trainer
Trainer_cls(model, temporal_contr_model, model_optimizer, temporal_contr_optimizer, train_dl, valid_dl,
test_dl, device, logger, configs, experiment_log_dir, training_mode='self_supervised')
print("Self_supervised end, start fine_tune!")
# fine_tune Trainer
train_dl = generator_uea(data=train_data, label=train_targets[i],
configs=configs, training_mode='fine_tune', drop_last=True)
valid_dl = generator_uea(data=val_data, label=val_targets[i],
configs=configs, training_mode='fine_tune', drop_last=False)
test_dl = generator_uea(data=test_data, label=test_targets[i],
configs=configs, training_mode='fine_tune', drop_last=False)
train_acc, val_acc, test_acc = Trainer_cls(model, temporal_contr_model, model_optimizer, temporal_contr_optimizer,
train_dl, valid_dl,
test_dl, device, logger, configs, experiment_log_dir,
training_mode='fine_tune')
# print(type(train_acc.data), train_acc.numpy(), val_acc.numpy(), test_acc.numpy())
# train_accuracies = torch.Tensor(train_accuracies)
# test_accuracies = torch.Tensor(test_accuracies)
train_accuracies.append(train_acc.item())
val_accuracies.append(val_acc.item())
test_accuracies.append(test_acc.item())
train_time = time.time() - t
print("train_accuracies = ", train_accuracies, len(train_accuracies))
test_accuracies = torch.Tensor(test_accuracies)
save_cls_result(args, test_accu=torch.mean(test_accuracies), test_std=torch.std(test_accuracies),
train_time=train_time / 5, end_val_epoch=0.0, seeds=args.seed)
logger.debug(f"Training time is : {datetime.now() - start_time}")
================================================
FILE: ts_classification_methods/tstcc_cls/models/TC.py
================================================
import torch
import torch.nn as nn
import numpy as np
from .attention import Seq_Transformer
class TC(nn.Module):
def __init__(self, configs, device):
super(TC, self).__init__()
self.num_channels = configs.final_out_channels
self.timestep = configs.TC.timesteps
self.Wk = nn.ModuleList([nn.Linear(configs.TC.hidden_dim, self.num_channels) for i in range(self.timestep)])
        self.lsoftmax = nn.LogSoftmax(dim=1)  # explicit dim (implicit dim is deprecated); input is (batch, batch)
self.device = device
self.projection_head = nn.Sequential(
nn.Linear(configs.TC.hidden_dim, configs.final_out_channels // 2),
nn.BatchNorm1d(configs.final_out_channels // 2),
nn.ReLU(inplace=True),
nn.Linear(configs.final_out_channels // 2, configs.final_out_channels // 4),
)
self.seq_transformer = Seq_Transformer(patch_size=self.num_channels, dim=configs.TC.hidden_dim, depth=4, heads=4, mlp_dim=64)
def forward(self, features_aug1, features_aug2):
z_aug1 = features_aug1 # features are (batch_size, #channels, seq_len)
seq_len = z_aug1.shape[2]
z_aug1 = z_aug1.transpose(1, 2)
z_aug2 = features_aug2
z_aug2 = z_aug2.transpose(1, 2)
batch = z_aug1.shape[0]
t_samples = torch.randint(seq_len - self.timestep, size=(1,)).long().to(self.device) # randomly pick time stamps
nce = 0 # average over timestep and batch
encode_samples = torch.empty((self.timestep, batch, self.num_channels)).float().to(self.device)
for i in np.arange(1, self.timestep + 1):
encode_samples[i - 1] = z_aug2[:, t_samples + i, :].view(batch, self.num_channels)
forward_seq = z_aug1[:, :t_samples + 1, :]
c_t = self.seq_transformer(forward_seq)
pred = torch.empty((self.timestep, batch, self.num_channels)).float().to(self.device)
for i in np.arange(0, self.timestep):
linear = self.Wk[i]
pred[i] = linear(c_t)
for i in np.arange(0, self.timestep):
total = torch.mm(encode_samples[i], torch.transpose(pred[i], 0, 1))
nce += torch.sum(torch.diag(self.lsoftmax(total)))
nce /= -1. * batch * self.timestep
return nce, self.projection_head(c_t)
================================================
FILE: ts_classification_methods/tstcc_cls/models/attention.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
from einops import rearrange, repeat
########################################################################################
class Residual(nn.Module):
def __init__(self, fn):
super().__init__()
self.fn = fn
def forward(self, x, **kwargs):
return self.fn(x, **kwargs) + x
class PreNorm(nn.Module):
def __init__(self, dim, fn):
super().__init__()
self.norm = nn.LayerNorm(dim)
self.fn = fn
def forward(self, x, **kwargs):
return self.fn(self.norm(x), **kwargs)
class FeedForward(nn.Module):
def __init__(self, dim, hidden_dim, dropout=0.):
super().__init__()
self.net = nn.Sequential(
nn.Linear(dim, hidden_dim),
nn.ReLU(),
nn.Dropout(dropout),
nn.Linear(hidden_dim, dim),
nn.Dropout(dropout)
)
def forward(self, x):
return self.net(x)
class Attention(nn.Module):
def __init__(self, dim, heads=8, dropout=0.):
super().__init__()
self.heads = heads
self.scale = dim ** -0.5
self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
self.to_out = nn.Sequential(
nn.Linear(dim, dim),
nn.Dropout(dropout)
)
def forward(self, x, mask=None):
b, n, _, h = *x.shape, self.heads
qkv = self.to_qkv(x).chunk(3, dim=-1)
q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=h), qkv)
dots = torch.einsum('bhid,bhjd->bhij', q, k) * self.scale
if mask is not None:
mask = F.pad(mask.flatten(1), (1, 0), value=True)
assert mask.shape[-1] == dots.shape[-1], 'mask has incorrect dimensions'
mask = mask[:, None, :] * mask[:, :, None]
dots.masked_fill_(~mask, float('-inf'))
del mask
attn = dots.softmax(dim=-1)
out = torch.einsum('bhij,bhjd->bhid', attn, v)
out = rearrange(out, 'b h n d -> b n (h d)')
out = self.to_out(out)
return out
class Transformer(nn.Module):
def __init__(self, dim, depth, heads, mlp_dim, dropout):
super().__init__()
self.layers = nn.ModuleList([])
for _ in range(depth):
self.layers.append(nn.ModuleList([
Residual(PreNorm(dim, Attention(dim, heads=heads, dropout=dropout))),
Residual(PreNorm(dim, FeedForward(dim, mlp_dim, dropout=dropout)))
]))
def forward(self, x, mask=None):
for attn, ff in self.layers:
x = attn(x, mask=mask)
x = ff(x)
return x
class Seq_Transformer(nn.Module):
def __init__(self, *, patch_size, dim, depth, heads, mlp_dim, channels=1, dropout=0.1):
super().__init__()
patch_dim = channels * patch_size
self.patch_to_embedding = nn.Linear(patch_dim, dim)
self.c_token = nn.Parameter(torch.randn(1, 1, dim))
self.transformer = Transformer(dim, depth, heads, mlp_dim, dropout)
self.to_c_token = nn.Identity()
def forward(self, forward_seq):
x = self.patch_to_embedding(forward_seq)
b, n, _ = x.shape
c_tokens = repeat(self.c_token, '() n d -> b n d', b=b)
x = torch.cat((c_tokens, x), dim=1)
x = self.transformer(x)
c_t = self.to_c_token(x[:, 0])
return c_t
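# Illustrative sketch (hypothetical sizes; not part of the original module):
# Seq_Transformer summarizes a (batch, steps, patch_size) sequence into a single
# context vector c_t of size `dim`, via the prepended learnable c-token.
if __name__ == "__main__":
    seq_transformer = Seq_Transformer(patch_size=128, dim=100, depth=4, heads=4, mlp_dim=64)
    forward_seq = torch.randn(2, 10, 128)
    c_t = seq_transformer(forward_seq)
    print(c_t.shape)  # torch.Size([2, 100])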
================================================
FILE: ts_classification_methods/tstcc_cls/models/loss.py
================================================
import torch
import numpy as np
class NTXentLoss(torch.nn.Module):
def __init__(self, device, batch_size, temperature, use_cosine_similarity):
super(NTXentLoss, self).__init__()
self.batch_size = batch_size
self.temperature = temperature
self.device = device
self.softmax = torch.nn.Softmax(dim=-1)
self.mask_samples_from_same_repr = self._get_correlated_mask().type(torch.bool)
self.similarity_function = self._get_similarity_function(use_cosine_similarity)
self.criterion = torch.nn.CrossEntropyLoss(reduction="sum")
def _get_similarity_function(self, use_cosine_similarity):
if use_cosine_similarity:
self._cosine_similarity = torch.nn.CosineSimilarity(dim=-1)
return self._cosine_simililarity
else:
return self._dot_simililarity
def _get_correlated_mask(self):
diag = np.eye(2 * self.batch_size)
l1 = np.eye((2 * self.batch_size), 2 * self.batch_size, k=-self.batch_size)
l2 = np.eye((2 * self.batch_size), 2 * self.batch_size, k=self.batch_size)
mask = torch.from_numpy((diag + l1 + l2))
mask = (1 - mask).type(torch.bool)
return mask.to(self.device)
@staticmethod
def _dot_simililarity(x, y):
v = torch.tensordot(x.unsqueeze(1), y.T.unsqueeze(0), dims=2)
# x shape: (N, 1, C)
# y shape: (1, C, 2N)
# v shape: (N, 2N)
return v
def _cosine_simililarity(self, x, y):
# x shape: (N, 1, C)
# y shape: (1, 2N, C)
# v shape: (N, 2N)
v = self._cosine_similarity(x.unsqueeze(1), y.unsqueeze(0))
return v
def forward(self, zis, zjs):
representations = torch.cat([zjs, zis], dim=0)
similarity_matrix = self.similarity_function(representations, representations)
# filter out the scores from the positive samples
l_pos = torch.diag(similarity_matrix, self.batch_size)
r_pos = torch.diag(similarity_matrix, -self.batch_size)
positives = torch.cat([l_pos, r_pos]).view(2 * self.batch_size, 1)
negatives = similarity_matrix[self.mask_samples_from_same_repr].view(2 * self.batch_size, -1)
logits = torch.cat((positives, negatives), dim=1)
logits /= self.temperature
labels = torch.zeros(2 * self.batch_size).to(self.device).long()
loss = self.criterion(logits, labels)
return loss / (2 * self.batch_size)
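# Illustrative sketch (hypothetical embeddings; not part of the original module):
# NT-Xent treats (zis[i], zjs[i]) as the positive pair and the remaining 2N-2
# representations in the doubled batch as negatives.
if __name__ == "__main__":
    loss_fn = NTXentLoss(device=torch.device('cpu'), batch_size=4,
                         temperature=0.2, use_cosine_similarity=True)
    zis, zjs = torch.randn(4, 16), torch.randn(4, 16)
    print(loss_fn(zis, zjs))  # scalar contrastive loss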
================================================
FILE: ts_classification_methods/tstcc_cls/models/model.py
================================================
from torch import nn
class base_Model(nn.Module):
def __init__(self, configs):
super(base_Model, self).__init__()
self.conv_block1 = nn.Sequential(
nn.Conv1d(configs.input_channels, 32, kernel_size=configs.kernel_size,
stride=configs.stride, bias=False, padding=(configs.kernel_size//2)),
nn.BatchNorm1d(32),
nn.ReLU(),
nn.MaxPool1d(kernel_size=2, stride=2, padding=1),
nn.Dropout(configs.dropout)
)
self.conv_block2 = nn.Sequential(
nn.Conv1d(32, 64, kernel_size=8, stride=1, bias=False, padding=4),
nn.BatchNorm1d(64),
nn.ReLU(),
nn.MaxPool1d(kernel_size=2, stride=2, padding=1)
)
self.conv_block3 = nn.Sequential(
nn.Conv1d(64, configs.final_out_channels, kernel_size=8, stride=1, bias=False, padding=4),
nn.BatchNorm1d(configs.final_out_channels),
nn.ReLU(),
nn.MaxPool1d(kernel_size=2, stride=2, padding=1),
)
model_output_dim = configs.features_len
self.logits = nn.Linear(model_output_dim * configs.final_out_channels, configs.num_classes)
def forward(self, x_in):
x = self.conv_block1(x_in)
x = self.conv_block2(x)
x = self.conv_block3(x)
x_flat = x.reshape(x.shape[0], -1)
logits = self.logits(x_flat)
return logits, x
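# Illustrative sketch (hypothetical config object; not part of the original module):
# with the UCR-style defaults (features_len=18), a univariate series of length 128
# flows through the three conv blocks to logits of shape (batch, num_classes).
if __name__ == "__main__":
    import types
    import torch
    cfg = types.SimpleNamespace(input_channels=1, kernel_size=8, stride=1,
                                final_out_channels=128, num_classes=3,
                                dropout=0.35, features_len=18)
    model = base_Model(cfg)
    logits, feats = model(torch.randn(2, 1, 128))
    print(logits.shape, feats.shape)  # torch.Size([2, 3]) torch.Size([2, 128, 18])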
================================================
FILE: ts_classification_methods/tstcc_cls/result/tstcc_0327_cls_result.csv
================================================
id,dataset_name,test_accuracy,test_std,val_accuracy,val_std,train_accuracy,train_std
0,ACSF1,0.775,0.0631,0.605,0.0622,0.545,0.0326
1,Adiac,0.6396,0.0847,0.5136,0.1267,0.42,0.1494
2,AllGestureWiimoteX,0.9602,0.0207,0.7524,0.0441,0.7311,0.0156
3,AllGestureWiimoteY,0.9887,0.0061,0.7964,0.0199,0.7592,0.0235
4,AllGestureWiimoteZ,0.9195,0.0121,0.697,0.0369,0.6619,0.0402
5,ArrowHead,0.8781,0.0487,0.8689,0.0773,0.8296,0.0447
6,BME,1.0,0.0,1.0,0.0,0.9833,0.0152
7,Beef,0.6938,0.2374,0.7,0.2327,0.6167,0.1918
8,BeetleFly,0.875,0.2119,0.85,0.0559,0.775,0.1046
9,BirdChicken,0.625,0.0884,0.775,0.1046,0.625,0.1768
10,CBF,0.9918,0.0067,1.0,0.0,0.9959,0.0071
11,Car,0.8781,0.0777,0.8333,0.1021,0.7167,0.0903
12,Chinatown,0.925,0.0711,0.9863,0.0137,0.9589,0.0168
13,ChlorineConcentration,0.8224,0.0745,0.8707,0.0967,0.8708,0.0886
14,CinCECGTorso,0.9966,0.0045,1.0,0.0,0.9984,0.0035
15,Coffee,0.975,0.0261,0.9667,0.0745,0.9455,0.122
16,Computers,0.7242,0.069,0.64,0.0534,0.616,0.0586
17,CricketX,0.9885,0.006,0.8144,0.0222,0.7398,0.0312
18,CricketY,0.9901,0.0081,0.7881,0.0364,0.7398,0.044
19,CricketZ,0.9896,0.0135,0.7931,0.0385,0.746,0.0428
20,Crop,0.9055,0.0058,0.7884,0.0041,0.7849,0.0049
21,DiatomSizeReduction,0.9656,0.0423,0.9846,0.0188,0.9752,0.0235
22,DistalPhalanxOutlineAgeGroup,0.8352,0.0209,0.85,0.0152,0.8125,0.0616
23,DistalPhalanxOutlineCorrect,0.8188,0.0215,0.8011,0.0204,0.7497,0.0523
24,DistalPhalanxTW,0.8086,0.0135,0.8148,0.0131,0.7736,0.0353
25,DodgerLoopDay,0.8062,0.0915,0.65,0.0677,0.5194,0.0572
26,DodgerLoopGame,0.975,0.0178,0.9187,0.0568,0.8919,0.0439
27,DodgerLoopWeekend,0.9969,0.007,0.975,0.0559,0.955,0.054
28,ECG200,0.9031,0.0639,0.865,0.0548,0.845,0.0597
29,ECG5000,0.9696,0.0038,0.9582,0.0056,0.9548,0.0075
30,ECGFiveDays,0.9937,0.004,1.0,0.0,0.9956,0.0058
31,EOGHorizontalSignal,0.9599,0.0105,0.7643,0.0477,0.6314,0.1079
32,EOGVerticalSignal,0.9339,0.0194,0.7233,0.0313,0.6072,0.0428
33,Earthquakes,0.7969,0.1324,0.8125,0.0126,0.7657,0.0249
34,ElectricDevices,0.9396,0.004,0.8748,0.0027,0.8687,0.0039
35,EthanolLevel,0.4219,0.1368,0.3994,0.1388,0.3571,0.1544
36,FaceAll,1.0,0.0,0.9918,0.0025,0.9883,0.007
37,FaceFour,0.975,0.0392,0.9739,0.0238,0.9375,0.0758
38,FacesUCR,1.0,0.0,0.9945,0.0035,0.9934,0.0043
39,FiftyWords,0.9984,0.0021,0.7995,0.0183,0.7732,0.0498
40,Fish,0.9219,0.0475,0.8629,0.048,0.8171,0.0383
41,FordA,0.9601,0.0121,0.9364,0.0073,0.9339,0.0057
42,FordB,0.9262,0.0092,0.9151,0.0121,0.9105,0.008
43,FreezerRegularTrain,0.9878,0.0072,0.9994,0.0009,0.9974,0.0029
44,FreezerSmallTrain,0.9864,0.0045,0.9997,0.0007,0.9972,0.002
45,Fungi,0.9875,0.0204,1.0,0.0,0.9705,0.0207
46,GestureMidAirD1,0.8938,0.0232,0.6147,0.0366,0.618,0.0548
47,GestureMidAirD2,0.9031,0.0458,0.5353,0.0369,0.4943,0.0612
48,GestureMidAirD3,0.7641,0.082,0.3765,0.0246,0.2959,0.048
49,GesturePebbleZ1,0.9969,0.0043,0.9311,0.0214,0.9208,0.0532
50,GesturePebbleZ2,0.9984,0.0035,0.9279,0.034,0.8883,0.0453
51,GunPoint,0.975,0.036,0.995,0.0112,0.955,0.0411
52,GunPointAgeSpan,0.932,0.0556,0.9758,0.018,0.9423,0.0215
53,GunPointMaleVersusFemale,0.9719,0.0216,0.9978,0.0049,0.9845,0.0061
54,GunPointOldVersusYoung,0.9688,0.01,0.967,0.0077,0.949,0.033
55,Ham,0.925,0.0704,0.8837,0.0368,0.8272,0.0686
56,HandOutlines,0.9044,0.0121,0.924,0.0157,0.8896,0.0235
57,Haptics,0.718,0.0537,0.5118,0.0345,0.527,0.0278
58,Herring,0.6875,0.0834,0.7769,0.0918,0.5228,0.0507
59,HouseTwenty,0.9906,0.014,0.8938,0.0474,0.8931,0.0521
60,InlineSkate,0.849,0.1152,0.7344,0.035,0.4828,0.2514
61,InsectEPGRegularTrain,0.9969,0.0043,0.895,0.0285,0.8748,0.044
62,InsectEPGSmallTrain,0.9563,0.0251,0.8213,0.0381,0.8008,0.0307
63,InsectWingbeatSound,0.8664,0.0328,0.7403,0.0088,0.7114,0.0256
64,ItalyPowerDemand,0.9547,0.0177,0.9767,0.0068,0.964,0.0185
65,LargeKitchenAppliances,0.9271,0.0214,0.796,0.048,0.7008,0.0477
66,Lightning2,0.7406,0.0559,0.8063,0.0823,0.7437,0.0749
67,Lightning7,0.9187,0.0461,0.7793,0.0393,0.735,0.1006
68,Mallat,0.9912,0.003,0.9913,0.007,0.9874,0.0068
69,Meat,0.4406,0.2404,0.5167,0.2545,0.5083,0.2981
70,MedicalImages,0.9172,0.0195,0.843,0.015,0.8082,0.0297
71,MelbournePedestrian,0.0996,0.0006,0.1051,0.0002,0.1012,0.0012
72,MiddlePhalanxOutlineAgeGroup,0.7453,0.0306,0.7838,0.0064,0.7562,0.0419
73,MiddlePhalanxOutlineCorrect,0.8336,0.0208,0.8502,0.0288,0.7903,0.0257
74,MiddlePhalanxTW,0.6461,0.0273,0.6486,0.0377,0.6329,0.0295
75,MixedShapesRegularTrain,0.9988,0.0019,0.9443,0.0065,0.9361,0.008
76,MixedShapesSmallTrain,0.9902,0.0109,0.9342,0.0239,0.9264,0.0157
77,MoteStrain,0.9828,0.0109,0.9601,0.014,0.9434,0.0222
78,NonInvasiveFetalECGThorax1,0.9821,0.0063,0.9287,0.0047,0.9245,0.0086
79,NonInvasiveFetalECGThorax2,0.9856,0.0049,0.9377,0.0073,0.9348,0.0038
80,OSULeaf,0.982,0.0245,0.7371,0.0682,0.6606,0.0348
81,OliveOil,0.2625,0.114,0.4667,0.1118,0.4167,0.0
82,PLAID,0.6081,0.0191,0.4855,0.0193,0.4714,0.0395
83,PhalangesOutlinesCorrect,0.8111,0.0249,0.823,0.0265,0.8021,0.0188
84,Phoneme,1.0,0.0,0.3883,0.0092,0.3596,0.0197
85,PickupGestureWiimoteZ,0.9875,0.028,0.76,0.0548,0.76,0.0962
86,PigAirwayPressure,0.7,0.1537,0.0889,0.0241,0.0577,0.0291
87,PigArtPressure,0.7188,0.1625,0.1587,0.0159,0.0929,0.0175
88,PigCVP,0.8766,0.0925,0.0921,0.0207,0.0481,0.0228
89,Plane,0.9719,0.0131,1.0,0.0,0.9762,0.0168
90,PowerCons,0.9828,0.0102,0.9722,0.017,0.9639,0.0158
91,ProximalPhalanxOutlineAgeGroup,0.8102,0.0287,0.838,0.0259,0.8479,0.0296
92,ProximalPhalanxOutlineCorrect,0.8262,0.0373,0.8516,0.0531,0.8259,0.0415
93,ProximalPhalanxTW,0.7969,0.0199,0.7868,0.0108,0.7702,0.0123
94,RefrigerationDevices,0.9307,0.0346,0.6851,0.0174,0.5464,0.0827
95,Rock,0.9812,0.028,0.8571,0.0875,0.7571,0.1083
96,ScreenType,0.7604,0.0629,0.5844,0.0213,0.4532,0.0327
97,SemgHandGenderCh2,0.991,0.0161,0.9653,0.0094,0.9415,0.0269
98,SemgHandMovementCh2,1.0,0.0,0.746,0.0235,0.683,0.0612
99,SemgHandSubjectCh2,1.0,0.0,0.9508,0.0194,0.9326,0.018
100,ShakeGestureWiimoteZ,0.9563,0.0474,0.86,0.0894,0.85,0.0612
101,ShapeletSim,0.8875,0.1766,0.67,0.1178,0.625,0.0919
102,ShapesAll,0.9881,0.0024,0.7865,0.0186,0.7776,0.0191
103,SmallKitchenAppliances,0.9125,0.0577,0.8063,0.0164,0.7274,0.0449
104,SmoothSubspace,0.9844,0.0096,0.9433,0.0508,0.9333,0.0635
105,SonyAIBORobotSurface1,0.9953,0.0043,0.9872,0.0091,0.9807,0.0122
106,SonyAIBORobotSurface2,0.993,0.0085,0.9891,0.0101,0.9886,0.0037
107,StarLightCurves,0.9742,0.0098,0.974,0.003,0.972,0.0048
108,Strawberry,0.9391,0.0133,0.9768,0.0134,0.9589,0.0143
109,SwedishLeaf,0.9875,0.0053,0.9649,0.0122,0.9474,0.0212
110,Symbols,0.9984,0.0016,0.9906,0.0045,0.978,0.0154
111,SyntheticControl,0.9883,0.0048,0.995,0.0046,0.99,0.007
112,ToeSegmentation1,0.9844,0.0166,0.9481,0.0203,0.9033,0.0418
113,ToeSegmentation2,0.9469,0.0559,0.9055,0.0317,0.7947,0.0823
114,Trace,0.9438,0.0831,0.93,0.0326,0.88,0.0542
115,TwoLeadECG,0.9756,0.0208,0.9992,0.0017,0.9956,0.0055
116,TwoPatterns,0.9994,0.0006,1.0,0.0,0.9988,0.0027
117,UMD,0.9906,0.014,0.9944,0.0124,0.95,0.0362
118,UWaveGestureLibraryAll,0.9993,0.0016,0.975,0.0034,0.9737,0.0063
119,UWaveGestureLibraryX,0.9337,0.0228,0.852,0.0051,0.841,0.0134
120,UWaveGestureLibraryY,0.9223,0.0306,0.7786,0.0141,0.7711,0.0154
121,UWaveGestureLibraryZ,0.9059,0.0214,0.7777,0.0114,0.7713,0.01
122,Wafer,0.9991,0.0008,0.9993,0.0009,0.9987,0.0005
123,Wine,0.5219,0.036,0.5787,0.052,0.5134,0.0674
124,WordSynonyms,0.9949,0.0043,0.772,0.0433,0.7445,0.0086
125,Worms,0.9406,0.0301,0.6038,0.0823,0.5615,0.069
126,WormsTwoClass,0.8313,0.1518,0.7115,0.0804,0.6744,0.0402
127,Yoga,0.9683,0.0204,0.9393,0.0334,0.9249,0.0327
================================================
FILE: ts_classification_methods/tstcc_cls/scripts/fivefold_tstcc_ucr.sh
================================================
python main_ucr.py --dataset ACSF1 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Adiac --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset AllGestureWiimoteX --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset AllGestureWiimoteY --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset AllGestureWiimoteZ --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ArrowHead --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset BME --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Beef --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset BeetleFly --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset BirdChicken --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset CBF --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Car --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Chinatown --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ChlorineConcentration --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset CinCECGTorso --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Coffee --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Computers --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset CricketX --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset CricketY --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset CricketZ --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Crop --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset DiatomSizeReduction --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset DistalPhalanxOutlineAgeGroup --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset DistalPhalanxOutlineCorrect --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset DistalPhalanxTW --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset DodgerLoopDay --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset DodgerLoopGame --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset DodgerLoopWeekend --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ECG200 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ECG5000 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ECGFiveDays --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset EOGHorizontalSignal --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset EOGVerticalSignal --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Earthquakes --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ElectricDevices --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset EthanolLevel --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset FaceAll --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset FaceFour --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset FacesUCR --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset FiftyWords --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Fish --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset FordA --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset FordB --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset FreezerRegularTrain --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset FreezerSmallTrain --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Fungi --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset GestureMidAirD1 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset GestureMidAirD2 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset GestureMidAirD3 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset GesturePebbleZ1 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset GesturePebbleZ2 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset GunPoint --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset GunPointAgeSpan --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset GunPointMaleVersusFemale --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset GunPointOldVersusYoung --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Ham --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset HandOutlines --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Haptics --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Herring --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset HouseTwenty --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset InlineSkate --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset InsectEPGRegularTrain --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset InsectEPGSmallTrain --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset InsectWingbeatSound --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ItalyPowerDemand --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset LargeKitchenAppliances --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Lightning2 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Lightning7 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Mallat --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Meat --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset MedicalImages --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset MelbournePedestrian --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset MiddlePhalanxOutlineAgeGroup --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset MiddlePhalanxOutlineCorrect --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset MiddlePhalanxTW --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset MixedShapesRegularTrain --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset MixedShapesSmallTrain --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset MoteStrain --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset NonInvasiveFetalECGThorax1 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset NonInvasiveFetalECGThorax2 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset OSULeaf --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset OliveOil --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset PLAID --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset PhalangesOutlinesCorrect --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Phoneme --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset PickupGestureWiimoteZ --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset PigAirwayPressure --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset PigArtPressure --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset PigCVP --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Plane --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset PowerCons --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ProximalPhalanxOutlineAgeGroup --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ProximalPhalanxOutlineCorrect --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ProximalPhalanxTW --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset RefrigerationDevices --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Rock --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ScreenType --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset SemgHandGenderCh2 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset SemgHandMovementCh2 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset SemgHandSubjectCh2 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ShakeGestureWiimoteZ --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ShapeletSim --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ShapesAll --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset SmallKitchenAppliances --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset SmoothSubspace --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset SonyAIBORobotSurface1 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset SonyAIBORobotSurface2 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset StarLightCurves --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Strawberry --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset SwedishLeaf --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Symbols --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset SyntheticControl --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ToeSegmentation1 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset ToeSegmentation2 --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Trace --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset TwoLeadECG --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset TwoPatterns --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset UMD --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset UWaveGestureLibraryAll --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset UWaveGestureLibraryX --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset UWaveGestureLibraryY --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset UWaveGestureLibraryZ --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Wafer --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Wine --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset WordSynonyms --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Worms --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset WormsTwoClass --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
python main_ucr.py --dataset Yoga --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42;
================================================
FILE: ts_classification_methods/tstcc_cls/scripts/fivefold_tstcc_uea.sh
================================================
python main_uea.py --dataset ArticularyWordRecognition --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset AtrialFibrillation --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset BasicMotions --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset CharacterTrajectories --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset Cricket --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset DuckDuckGeese --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset EigenWorms --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset Epilepsy --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset EthanolConcentration --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset ERing --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset FaceDetection --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset FingerMovements --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset HandMovementDirection --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset Handwriting --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset Heartbeat --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset InsectWingbeat --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset JapaneseVowels --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset Libras --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset LSST --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset MotorImagery --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset NATOPS --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset PenDigits --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset PEMS-SF --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset PhonemeSpectra --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset RacketSports --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset SelfRegulationSCP1 --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset SelfRegulationSCP2 --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset SpokenArabicDigits --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset StandWalkJump --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset UWaveGestureLibrary --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42;
================================================
FILE: ts_classification_methods/tstcc_cls/scripts/generator_ucr.py
================================================
ucr_dataset = ['ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY', 'AllGestureWiimoteZ', 'ArrowHead', 'BME',
'Beef',
'BeetleFly', 'BirdChicken', 'CBF', 'Car', 'Chinatown', 'ChlorineConcentration', 'CinCECGTorso', 'Coffee',
'Computers',
'CricketX', 'CricketY', 'CricketZ', 'Crop', 'DiatomSizeReduction', 'DistalPhalanxOutlineAgeGroup',
'DistalPhalanxOutlineCorrect', 'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame', 'DodgerLoopWeekend',
'ECG200', 'ECG5000', 'ECGFiveDays', 'EOGHorizontalSignal', 'EOGVerticalSignal', 'Earthquakes',
'ElectricDevices',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords', 'Fish', 'FordA', 'FordB',
'FreezerRegularTrain',
'FreezerSmallTrain', 'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3', 'GesturePebbleZ1',
'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan', 'GunPointMaleVersusFemale', 'GunPointOldVersusYoung',
'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate', 'InsectEPGRegularTrain',
'InsectEPGSmallTrain',
'InsectWingbeatSound', 'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2', 'Lightning7',
'Mallat', 'Meat',
'MedicalImages', 'MelbournePedestrian', 'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain', 'MoteStrain',
'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OSULeaf', 'OliveOil', 'PLAID', 'PhalangesOutlinesCorrect', 'Phoneme',
'PickupGestureWiimoteZ', 'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'Plane', 'PowerCons',
'ProximalPhalanxOutlineAgeGroup', 'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices',
'Rock', 'ScreenType', 'SemgHandGenderCh2', 'SemgHandMovementCh2', 'SemgHandSubjectCh2',
'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace', 'SonyAIBORobotSurface1',
'SonyAIBORobotSurface2', 'StarLightCurves', 'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG', 'TwoPatterns', 'UMD',
'UWaveGestureLibraryAll',
'UWaveGestureLibraryX', 'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine', 'WordSynonyms',
'Worms',
'WormsTwoClass', 'Yoga']
i = 1
for dataset in ucr_dataset:
print("i = ", i, ", dataset = ", dataset)
## python main_ucr.py --dataset Coffee --device cuda:1 --save_csv_name tstcc_0327_ --seed 42
with open('/SSD/lz/time_tsm/tstcc_cls/scripts/fivefold_tstcc_ucr.sh', 'a') as f:
f.write(
'python main_ucr.py --dataset ' + dataset + ' --device cuda:0 --save_csv_name tstcc_ucr_0423_ --seed 42' + ';\n')
i = i + 1
## nohup ./scripts/fivefold_tstcc_ucr.sh &
================================================
FILE: ts_classification_methods/tstcc_cls/scripts/generator_uea.py
================================================
uea_all = ['ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions', 'CharacterTrajectories',
'Cricket', 'DuckDuckGeese', 'EigenWorms', 'Epilepsy', 'EthanolConcentration', 'ERing',
'FaceDetection', 'FingerMovements', 'HandMovementDirection', 'Handwriting',
'Heartbeat', 'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PenDigits', 'PEMS-SF', 'PhonemeSpectra', 'RacketSports', 'SelfRegulationSCP1',
'SelfRegulationSCP2', 'SpokenArabicDigits', 'StandWalkJump', 'UWaveGestureLibrary']
i = 1
for dataset in uea_all:
print("i = ", i, ", dataset = ", dataset)
## python main_uea.py --dataset BasicMotions --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42
with open('/SSD/lz/time_tsm/tstcc_cls/scripts/fivefold_tstcc_uea.sh', 'a') as f:
f.write(
'python main_uea.py --dataset ' + dataset + ' --device cuda:0 --save_csv_name tstcc_uea_0423_ --seed 42' + ';\n')
i = i + 1
## nohup ./scripts/fivefold_tstcc_uea.sh &
================================================
FILE: ts_classification_methods/tstcc_cls/scripts/part_uea_tstcc.sh
================================================
python main_uea.py --dataset PEMS-SF --device cuda:1 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset InsectWingbeat --device cuda:1 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset FaceDetection --device cuda:1 --save_csv_name tstcc_uea_0423_ --seed 42;
python main_uea.py --dataset EigenWorms --device cuda:1 --save_csv_name tstcc_uea_0423_ --seed 42;
================================================
FILE: ts_classification_methods/tstcc_cls/trainer/trainer.py
================================================
import os
import sys
sys.path.append("..")
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from models.loss import NTXentLoss
def Trainer(model, temporal_contr_model, model_optimizer, temp_cont_optimizer, train_dl, valid_dl, test_dl,
device, logger, config, experiment_log_dir, training_mode):
# Start training
logger.debug("Training started ....")
criterion = nn.CrossEntropyLoss()
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(model_optimizer, 'min')
for epoch in range(1, config.num_epoch + 1):
# Train and validate
train_loss, train_acc = model_train(model, temporal_contr_model, model_optimizer, temp_cont_optimizer, criterion, train_dl, config, device, training_mode)
valid_loss, valid_acc, _, _ = model_evaluate(model, temporal_contr_model, valid_dl, device, training_mode)
if training_mode != 'self_supervised': # use scheduler in all other modes.
scheduler.step(valid_loss)
logger.debug(f'\nEpoch : {epoch}\n'
f'Train Loss : {train_loss:.4f}\t | \tTrain Accuracy : {train_acc:2.4f}\n'
f'Valid Loss : {valid_loss:.4f}\t | \tValid Accuracy : {valid_acc:2.4f}')
os.makedirs(os.path.join(experiment_log_dir, "saved_models"), exist_ok=True)
chkpoint = {'model_state_dict': model.state_dict(), 'temporal_contr_model_state_dict': temporal_contr_model.state_dict()}
torch.save(chkpoint, os.path.join(experiment_log_dir, "saved_models", f'ckp_last.pt'))
if training_mode != "self_supervised": # no need to run the evaluation for self-supervised mode.
# evaluate on the test set
logger.debug('\nEvaluate on the Test set:')
test_loss, test_acc, _, _ = model_evaluate(model, temporal_contr_model, test_dl, device, training_mode)
logger.debug(f'Test loss :{test_loss:0.4f}\t | Test Accuracy : {test_acc:0.4f}')
logger.debug("\n################## Training is Done! #########################")
def Trainer_cls(model, temporal_contr_model, model_optimizer, temp_cont_optimizer, train_dl, valid_dl, test_dl, device, logger, config, experiment_log_dir, training_mode):
# Start training
logger.debug("Training started ....")
criterion = nn.CrossEntropyLoss()
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(model_optimizer, 'min')
max_val_acc = 0
end_train_acc = 0
test_acc = 0
test_loss = 0
max_epoch = 0
for epoch in range(1, config.num_epoch + 1):
# Train and validate
train_loss, train_acc = model_train(model, temporal_contr_model, model_optimizer, temp_cont_optimizer, criterion, train_dl, config, device, training_mode)
valid_loss, valid_acc, _, _ = model_evaluate(model, temporal_contr_model, valid_dl, device, training_mode)
if valid_acc > max_val_acc:
max_epoch = epoch
max_val_acc = valid_acc
end_train_acc = train_acc
test_loss, test_acc, _, _ = model_evaluate(model, temporal_contr_model, test_dl, device, training_mode)
if training_mode != 'self_supervised': # use scheduler in all other modes.
scheduler.step(valid_loss)
if epoch % 100 == 0:
logger.debug(f'\nEpoch : {epoch}\n'
f'Train Loss : {train_loss:.4f}\t | \tTrain Accuracy : {train_acc:2.4f}\n'
f'Valid Loss : {valid_loss:.4f}\t | \tValid Accuracy : {valid_acc:2.4f}')
# os.makedirs(os.path.join(experiment_log_dir, "saved_models"), exist_ok=True)
# chkpoint = {'model_state_dict': model.state_dict(), 'temporal_contr_model_state_dict': temporal_contr_model.state_dict()}
# torch.save(chkpoint, os.path.join(experiment_log_dir, "saved_models", f'ckp_last.pt'))
if training_mode != "self_supervised": # no need to run the evaluation for self-supervised mode.
# evaluate on the test set
logger.debug('\nEvaluate on the Test set:')
# test_loss, test_acc, _, _ = model_evaluate(model, temporal_contr_model, test_dl, device, training_mode)
logger.debug(f'Test loss :{test_loss:0.4f}\t | Test Accuracy : {test_acc:0.4f}\t | max_epoch = {max_epoch}')
logger.debug("\n################## Training is Done! #########################")
return end_train_acc, max_val_acc, test_acc
def model_train(model, temporal_contr_model, model_optimizer, temp_cont_optimizer, criterion, train_loader, config, device, training_mode):
total_loss = []
total_acc = []
model.train()
temporal_contr_model.train()
for batch_idx, (data, labels, aug1, aug2) in enumerate(train_loader):
# send to device
data, labels = data.float().to(device), labels.long().to(device)
aug1, aug2 = aug1.float().to(device), aug2.float().to(device)
# optimizer
model_optimizer.zero_grad()
temp_cont_optimizer.zero_grad()
if training_mode == "self_supervised":
predictions1, features1 = model(aug1)
predictions2, features2 = model(aug2)
# normalize projection feature vectors
features1 = F.normalize(features1, dim=1)
features2 = F.normalize(features2, dim=1)
temp_cont_loss1, temp_cont_lstm_feat1 = temporal_contr_model(features1, features2)
temp_cont_loss2, temp_cont_lstm_feat2 = temporal_contr_model(features2, features1)
# normalize projection feature vectors
zis = temp_cont_lstm_feat1
zjs = temp_cont_lstm_feat2
else:
output = model(data)
# compute loss
if training_mode == "self_supervised":
lambda1 = 1
lambda2 = 0.7
nt_xent_criterion = NTXentLoss(device, config.batch_size, config.Context_Cont.temperature,
config.Context_Cont.use_cosine_similarity)
loss = (temp_cont_loss1 + temp_cont_loss2) * lambda1 + nt_xent_criterion(zis, zjs) * lambda2
else: # supervised training or fine-tuning
predictions, features = output
loss = criterion(predictions, labels)
total_acc.append(labels.eq(predictions.detach().argmax(dim=1)).float().mean())
total_loss.append(loss.item())
loss.backward()
model_optimizer.step()
temp_cont_optimizer.step()
total_loss = torch.tensor(total_loss).mean()
if training_mode == "self_supervised":
total_acc = 0
else:
total_acc = torch.tensor(total_acc).mean()
return total_loss, total_acc
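# In self-supervised mode the objective above combines the two cross-view
# temporal contrasting losses (weighted by lambda1 = 1) with the contextual
# NT-Xent loss on the context features (weighted by lambda2 = 0.7); in the
# other modes the loss is plain cross-entropy on the classifier logits.
# Accuracy is only meaningful (and only tracked) in the supervised and
# fine-tuning modes.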
def model_evaluate(model, temporal_contr_model, test_dl, device, training_mode):
model.eval()
temporal_contr_model.eval()
total_loss = []
total_acc = []
criterion = nn.CrossEntropyLoss()
outs = np.array([])
trgs = np.array([])
with torch.no_grad():
for data, labels, _, _ in test_dl:
data, labels = data.float().to(device), labels.long().to(device)
if training_mode == "self_supervised":
pass
else:
output = model(data)
# compute loss
if training_mode != "self_supervised":
predictions, features = output
loss = criterion(predictions, labels)
total_acc.append(labels.eq(predictions.detach().argmax(dim=1)).float().mean())
total_loss.append(loss.item())
if training_mode != "self_supervised":
pred = predictions.max(1, keepdim=True)[1] # get the index of the max log-probability
outs = np.append(outs, pred.cpu().numpy())
trgs = np.append(trgs, labels.data.cpu().numpy())
if training_mode != "self_supervised":
total_loss = torch.tensor(total_loss).mean() # average loss
else:
total_loss = 0
if training_mode == "self_supervised":
total_acc = 0
return total_loss, total_acc, [], []
else:
total_acc = torch.tensor(total_acc).mean() # average acc
return total_loss, total_acc, outs, trgs
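# Usage sketch (hypothetical variable names; the real entry points are
# main_ucr.py / main_uea.py). Trainer_cls performs model selection on the
# validation split: it reports the test accuracy measured at the epoch with
# the highest validation accuracy.
#
#   train_acc, val_acc, test_acc = Trainer_cls(
#       model, temporal_contr_model, model_optimizer, temp_cont_optimizer,
#       train_dl, valid_dl, test_dl, device, logger, configs,
#       experiment_log_dir, training_mode='fine_tune')  # any mode except 'self_supervised'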
================================================
FILE: ts_classification_methods/tstcc_cls/utils.py
================================================
import logging
import os
import random
import sys
from shutil import copy
import numpy as np
import pandas as pd
import torch
from sklearn.metrics import classification_report, cohen_kappa_score, confusion_matrix, accuracy_score
from tstcc_cls.dataloader.dataloader import Load_Dataset
def generator_ucr_config(data, label, configs):
X = np.reshape(data, (data.shape[0], -1))
Y = label
num_class = np.unique(Y).shape[0]
series_len = X.shape[1]
for i in range(3):
if series_len % 2 == 1:
series_len = series_len + 3
series_len = series_len // 2
else:
series_len = series_len // 2 + 1
configs.features_len = series_len
configs.num_classes = num_class
while X.shape[0] < configs.batch_size:
configs.batch_size = configs.batch_size // 2
# print("num_class = ", num_class, ", features_len = ", features_len)
def generator_ucr(data, label, configs, training_mode, drop_last=True):
# print("Raw data shape = ", data.shape)
data = np.reshape(data, (data.shape[0], -1))
# print("New data shape = ", data.shape)
data_dict = dict()
data_dict["samples"] = torch.from_numpy(data).unsqueeze(1)
# print("samples data shape = ", data_dict["samples"].shape)
data_dict["labels"] = torch.from_numpy(label)
tr_dataset = Load_Dataset(data_dict, configs, training_mode)
tr_loader = torch.utils.data.DataLoader(dataset=tr_dataset, batch_size=configs.batch_size,
shuffle=True, drop_last=drop_last,
num_workers=0)
return tr_loader
def generator_uea_config(data, label, configs):
Y = label
num_class = np.unique(Y).shape[0]
series_len = data.shape[1]
for i in range(3):
if series_len % 2 == 1:
series_len = series_len + 3
series_len = series_len // 2
else:
series_len = series_len // 2 + 1
configs.features_len = series_len
configs.num_classes = num_class
configs.input_channels = data.shape[2]
while data.shape[0] < configs.batch_size:
configs.batch_size = configs.batch_size // 2
def generator_uea(data, label, configs, training_mode, drop_last=True):
data_dict = dict()
# print("shape = ", data.shape)
data_dict["samples"] = torch.from_numpy(data)
data_dict["labels"] = torch.from_numpy(label)
tr_dataset = Load_Dataset(data_dict, configs, training_mode)
tr_loader = torch.utils.data.DataLoader(dataset=tr_dataset, batch_size=configs.batch_size,
shuffle=True, drop_last=drop_last,
num_workers=0)
return tr_loader
def set_requires_grad(model, dict_, requires_grad=True):
for param in model.named_parameters():
if param[0] in dict_:
param[1].requires_grad = requires_grad
def fix_randomness(SEED):
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
def _calc_metrics(pred_labels, true_labels, log_dir, home_path):
pred_labels = np.array(pred_labels).astype(int)
true_labels = np.array(true_labels).astype(int)
# save targets
labels_save_path = os.path.join(log_dir, "labels")
os.makedirs(labels_save_path, exist_ok=True)
np.save(os.path.join(labels_save_path, "predicted_labels.npy"), pred_labels)
np.save(os.path.join(labels_save_path, "true_labels.npy"), true_labels)
r = classification_report(true_labels, pred_labels, digits=6, output_dict=True)
cm = confusion_matrix(true_labels, pred_labels)
df = pd.DataFrame(r)
df["cohen"] = cohen_kappa_score(true_labels, pred_labels)
df["accuracy"] = accuracy_score(true_labels, pred_labels)
df = df * 100
# save classification report
exp_name = os.path.split(os.path.dirname(log_dir))[-1]
training_mode = os.path.basename(log_dir)
file_name = f"{exp_name}_{training_mode}_classification_report.xlsx"
report_Save_path = os.path.join(home_path, log_dir, file_name)
df.to_excel(report_Save_path)
# save confusion matrix
cm_file_name = f"{exp_name}_{training_mode}_confusion_matrix.torch"
cm_Save_path = os.path.join(home_path, log_dir, cm_file_name)
torch.save(cm, cm_Save_path)
def _logger(logger_name, level=logging.DEBUG):
"""
Method to return a custom logger with the given name and level
"""
logger = logging.getLogger(logger_name)
logger.setLevel(level)
# format_string = ("%(asctime)s — %(name)s — %(levelname)s — %(funcName)s:"
# "%(lineno)d — %(message)s")
format_string = "%(message)s"
log_format = logging.Formatter(format_string)
# Creating and adding the console handler
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setFormatter(log_format)
logger.addHandler(console_handler)
# Creating and adding the file handler
file_handler = logging.FileHandler(logger_name, mode='a')
file_handler.setFormatter(log_format)
logger.addHandler(file_handler)
return logger
def copy_Files(destination, data_type):
destination_dir = os.path.join(destination, "model_files")
os.makedirs(destination_dir, exist_ok=True)
copy("main.py", os.path.join(destination_dir, "main.py"))
copy("trainer/trainer.py", os.path.join(destination_dir, "trainer.py"))
copy(f"config_files/{data_type}_Configs.py", os.path.join(destination_dir, f"{data_type}_Configs.py"))
copy("dataloader/augmentations.py", os.path.join(destination_dir, "augmentations.py"))
copy("dataloader/dataloader.py", os.path.join(destination_dir, "dataloader.py"))
copy(f"models/model.py", os.path.join(destination_dir, f"model.py"))
copy("models/loss.py", os.path.join(destination_dir, "loss.py"))
copy("models/TC.py", os.path.join(destination_dir, "TC.py"))
================================================
FILE: ts_classification_methods/visualize.py
================================================
import argparse
import os
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn
from scipy.interpolate import interp1d
from data import normalize_per_series
from model import FCN, DilatedConvolutionVis, Classifier
from tsm_utils import load_data, transfer_labels, set_seed
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
DEVICE = torch.device('cuda:0' if torch.cuda.is_available() else "cpu")
def heatmap(xs, ys, dataset_name='MixedShapesSmallTrain', num_class=5, cls=4):
model = FCN(num_class)
model.to(DEVICE)
ts1 = plt.subplot2grid((2, 15), loc=(0, 0), colspan=4, rowspan=1)
hm1 = plt.subplot2grid((2, 15), loc=(1, 0), colspan=4)
ts2 = plt.subplot2grid((2, 15), loc=(0, 5), colspan=4, rowspan=1)
hm2 = plt.subplot2grid((2, 15), loc=(1, 5), colspan=4)
ts3 = plt.subplot2grid((2, 15), loc=(0, 10), colspan=4, rowspan=1)
hm3 = plt.subplot2grid((2, 15), loc=(1, 10), colspan=4)
x0s = xs[np.where(ys == cls)]
x0_mean = np.mean(x0s, axis=1)
x0_mean_mean = np.mean(x0_mean, axis=0)
class0 = x0s[np.where(np.abs(x0_mean - x0_mean_mean) == min(np.abs(x0_mean - x0_mean_mean)))[0][0]]
x1 = class0
x_copy = x1
# direct cls
model.load_state_dict(
torch.load('./visuals/' + dataset_name + '/direct_fcn_linear_encoder_weights.pt', map_location='cuda:0'))
model.eval()
ts1.set_title('Direct classification')
ts1.plot(range(x_copy.shape[0]), x_copy)
x1 = torch.from_numpy(x1).to(DEVICE)
x1 = torch.unsqueeze(x1, 0)
x1 = torch.unsqueeze(x1, 0)
gaps, feature = model(x1, vis=True)
gaps = torch.squeeze(gaps)
feature = torch.squeeze(feature)
feature = feature[torch.topk((gaps - gaps.mean()) ** 2, k=16).indices, :].cpu()
hm1.pcolormesh(feature[0:16], shading='nearest')
# supervised transfer
# model.load_state_dict(torch.load('./visuals/' + dataset_name + '/fcn_nonlinear_encoder_finetune_weights_UWaveGestureLibraryZ.pt',map_location='cuda:0'))
model.load_state_dict(
torch.load('./visuals/' + dataset_name + '/fcn_linear_encoder_finetune_weights_UWaveGestureLibraryZ.pt',
map_location='cuda:0'))
model.eval()
ts2.set_title('Positive transfer')
ts2.plot(range(x_copy.shape[0]), x_copy)
gaps, feature = model(x1, vis=True)
gaps = torch.squeeze(gaps)
feature = torch.squeeze(feature)
feature = feature[torch.topk((gaps - gaps.mean()) ** 2, k=16).indices, :].cpu()
hm2.pcolormesh(feature[0:16], shading='nearest')
# model.load_state_dict(torch.load('./visuals/' + dataset_name + '/fcn_nonlinear_encoder_finetune_weights_ElectricDevices.pt',map_location='cuda:0'))
model.load_state_dict(
torch.load('./visuals/' + dataset_name + '/fcn_linear_encoder_finetune_weights_Crop.pt',
map_location='cuda:0'))
model.eval()
ts3.set_title('Negative transfer')
ts3.plot(range(x_copy.shape[0]), x_copy)
gaps, feature = model(x1, vis=True)
gaps = torch.squeeze(gaps)
feature = torch.squeeze(feature)
feature = feature[torch.topk((gaps - gaps.mean()) ** 2, k=16).indices, :].cpu()
hm3.pcolormesh(feature[0:16], shading='nearest')
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=0.15, hspace=0.30)
plt.tight_layout()
plt.savefig('./visuals/' + dataset_name + '_positive_negative.png')
plt.savefig('./visuals/' + dataset_name + '_positive_negative.pdf')
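# heatmap() above contrasts, for one representative series of the chosen class,
# the 16 feature channels whose global-average-pooled activations deviate most
# from the mean (via torch.topk on (gaps - gaps.mean())**2) under three
# settings: direct training, fine-tuning from a positive-transfer source
# (UWaveGestureLibraryZ), and fine-tuning from a negative-transfer source (Crop).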
def multi_cam(xs, ys):
# sampling
x0s = xs[np.where(ys == 0)]
x1s = xs[np.where(ys == 1)]
x0_mean = np.mean(x0s, axis=1)
x0_mean_mean = np.mean(x0_mean, axis=0)
class0 = x0s[np.where(np.abs(x0_mean - x0_mean_mean) == min(np.abs(x0_mean - x0_mean_mean)))]
# class0 = np.expand_dims(class0, 0)
print(class0.shape)
x1_mean = np.mean(x1s, axis=1)
x1_mean_mean = np.mean(x1_mean, axis=0)
class1 = x1s[np.where(np.abs(x1_mean - x1_mean_mean) == min(np.abs(x1_mean - x1_mean_mean)))][0]
class1 = np.expand_dims(class1, 0)
print(class1.shape)
# print(class0.mean())
# print(class1.mean())
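# cam() below computes a class activation map (CAM): for a sample x of class
# `label`,
#   cas[t] = sum_k w[label, k] * f_k(t),
# where f_k is the k-th channel of the final conv feature map and w are the
# linear classifier weights. The map is shifted to be non-negative, scaled to
# [0, 100], and interpolated to 2000 points to colour a scatter plot of x.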
def cam(x, label):
x = torch.from_numpy(x).to(DEVICE)
# x = torch.unsqueeze(x, 0)
x = torch.unsqueeze(x, 0)
features, vis_out = model(x, vis=True)
pred = classifier(features)
w_k_c = classifier.state_dict()['dense.weight']
cas = np.zeros(dtype=np.float16, shape=(vis_out.shape[2]))
for k, w in enumerate(w_k_c[label, :]):
cas += (w * vis_out[0, k, :]).cpu().numpy()
minimum = np.min(cas)
# print(cas)
cas = cas - minimum
cas = cas / max(cas)
cas = cas * 100
x = x.cpu().numpy()
plt_x = np.linspace(0, x.shape[2] - 1, 2000, endpoint=True)
f = interp1d(range(x.shape[2]), x.squeeze())
y = f(plt_x)
f = interp1d(range(x.shape[2]), cas)
cas = f(plt_x).astype(int)
plt.scatter(x=plt_x, y=y, c=cas, cmap='jet', marker='.', s=2, vmin=0, vmax=100, linewidths=1.0)
plt.yticks([-1.0, 0.0, 1.0])
plt.figure()
model = FCN(2).to(DEVICE)
classifier = Classifier(128, 2).to(DEVICE)
model.load_state_dict(torch.load('./visuals/GunPoint/direct_fcn_encoder.pt', map_location='cuda:0'))
classifier.load_state_dict(torch.load('./visuals/GunPoint/direct_fcn_classifier.pt', map_location='cuda:0'))
model.eval()
classifier.eval()
x1 = torch.from_numpy(xs).to(DEVICE)
x1 = torch.unsqueeze(x1, 1)
features, _ = model(x1, vis=True)
val_pred = features
val_pred = classifier(val_pred)
ys1 = torch.from_numpy(ys).to(DEVICE)
val_accu = torch.sum(torch.argmax(val_pred.data, axis=1) == ys1)
val_accu = val_accu / len(ys)
print("val accuracy direct = ", val_accu)
ax1 = plt.subplot(4, 1, 1)
plt.title('Direct classification via FCN (100%)')
cam(class0, 0)
cam(class1, 1)
model = DilatedConvolutionVis(in_channels=1, embedding_channels=40, out_channels=320, depth=3,
reduced_size=320, kernel_size=3, num_classes=2).to(DEVICE)
classifier = Classifier(320, 2).to(DEVICE)
model.load_state_dict(
torch.load('./visuals/GunPoint/direct_dilated_encoder.pt', map_location='cuda:0'))
classifier.load_state_dict(
torch.load('./visuals/GunPoint/direct_dilated_classifier.pt', map_location='cuda:0'))
model.eval()
classifier.eval()
features, _ = model(x1, vis=True)
val_pred = features
val_pred = classifier(val_pred)
ys1 = torch.from_numpy(ys).to(DEVICE)
val_accu = torch.sum(torch.argmax(val_pred.data, axis=1) == ys1)
val_accu = val_accu / len(ys)
print("val accuracy dilated = ", val_accu)
ax2 = plt.subplot(4, 1, 2)
plt.title('Direct classification via TCN (50%)')
cam(class0, 0)
cam(class1, 1)
model = FCN(2).to(DEVICE)
classifier = Classifier(128, 2).to(DEVICE)
model.load_state_dict(
torch.load('./visuals/GunPoint/supervised_encoder_UWaveGestureLibraryX_linear.pt', map_location='cuda:0'))
classifier.load_state_dict(
torch.load('./visuals/GunPoint/supervised_classifier_UWaveGestureLibraryX_linear.pt', map_location='cuda:0'))
model.eval()
classifier.eval()
features, _ = model(x1, vis=True)
val_pred = features
val_pred = classifier(val_pred)
ys1 = torch.from_numpy(ys).to(DEVICE)
val_accu = torch.sum(torch.argmax(val_pred.data, axis=1) == ys1)
val_accu = val_accu / len(ys)
print("val accuracy supervised = ", val_accu)
ax3 = plt.subplot(4, 1, 3)
plt.title('Supervised transfer via FCN (100%)')
cam(class0, 0)
cam(class1, 1)
model.load_state_dict(
torch.load('./visuals/GunPoint/unsupervised_encoder_UWaveGestureLibraryX_linear.pt', map_location='cuda:0'))
classifier.load_state_dict(
torch.load('./visuals/GunPoint/unsupervised_classifier_UWaveGestureLibraryX_linear.pt', map_location='cuda:0'))
model.eval()
classifier.eval()
features, _ = model(x1, vis=True)
val_pred = features
val_pred = classifier(val_pred)
ys1 = torch.from_numpy(ys).to(DEVICE)
val_accu = torch.sum(torch.argmax(val_pred.data, axis=1) == ys1)
val_accu = val_accu / len(ys)
print("val accuracy unsupervised = ", val_accu)
ax4 = plt.subplot(4, 1, 4)
plt.title('Unsupervised transfer via FCN decoder (98.5%)')
cam(class0, 0)
cam(class1, 1)
plt.colorbar(ax=[ax1, ax2, ax3, ax4]) # Add a color scale bar on the right side
plt.subplots_adjust(left=None, bottom=None, right=0.75, top=None, wspace=0.00, hspace=0.9)
# plt.tight_layout()
plt.savefig('./visuals/fcn_dilated_supervised_unsupervised_transfer.png', bbox_inches='tight')
plt.savefig('./visuals/fcn_dilated_supervised_unsupervised_transfer.pdf', bbox_inches='tight')
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--dataroot', type=str, default='/SSD/lz/UCRArchive_2018',
help='data root') ## /dev_data/zzj/hzy/datasets/UCR
parser.add_argument('--dataset', type=str, default='GunPoint',
help='dataset name') ## Wine GunPoint FreezerSmallTrain MixedShapesSmallTrain
parser.add_argument('--backbone', type=str, choices=['dilated', 'fcn'], default='fcn', help='encoder backbone')
parser.add_argument('--graph', type=str, choices=['cam', 'heatmap', 'tsne'], default='cam')
parser.add_argument('--random_seed', type=int, default=42, help='shuffle seed')
args = parser.parse_args()
set_seed(args)
xs, ys, num_classes = load_data(args.dataroot, args.dataset)
xs = normalize_per_series(xs)
ys = transfer_labels(ys)
if args.graph == 'cam':
multi_cam(xs, ys)
elif args.graph == 'heatmap':
heatmap(xs, ys, dataset_name='Wine', num_class=2, cls=0)
================================================
FILE: ts_forecasting_methods/CoST/CODEOWNERS
================================================
# Comment line immediately above ownership line is reserved for related other information. Please be careful while editing.
#ECCN:Open Source
================================================
FILE: ts_forecasting_methods/CoST/CODE_OF_CONDUCT.md
================================================
# Salesforce Open Source Community Code of Conduct
## About the Code of Conduct
Equality is a core value at Salesforce. We believe a diverse and inclusive
community fosters innovation and creativity, and are committed to building a
culture where everyone feels included.
Salesforce open-source projects are committed to providing a friendly, safe, and
welcoming environment for all, regardless of gender identity and expression,
sexual orientation, disability, physical appearance, body size, ethnicity, nationality,
race, age, religion, level of experience, education, socioeconomic status, or
other similar personal characteristics.
The goal of this code of conduct is to specify a baseline standard of behavior so
that people with different social values and communication styles can work
together effectively, productively, and respectfully in our open source community.
It also establishes a mechanism for reporting issues and resolving conflicts.
All questions and reports of abusive, harassing, or otherwise unacceptable behavior
in a Salesforce open-source project may be reported by contacting the Salesforce
Open Source Conduct Committee at ossconduct@salesforce.com.
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of gender
identity and expression, sexual orientation, disability, physical appearance,
body size, ethnicity, nationality, race, age, religion, level of experience, education,
socioeconomic status, or other similar personal characteristics.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy toward other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Personal attacks, insulting/derogatory comments, or trolling
* Public or private harassment
* Publishing, or threatening to publish, others' private information—such as
a physical or electronic address—without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
* Advocating for or encouraging any of the above behaviors
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned with this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project email
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the Salesforce Open Source Conduct Committee
at ossconduct@salesforce.com. All complaints will be reviewed and investigated
and will result in a response that is deemed necessary and appropriate to the
circumstances. The committee is obligated to maintain confidentiality with
regard to the reporter of an incident. Further details of specific enforcement
policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership and the Salesforce Open Source Conduct
Committee.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][contributor-covenant-home],
version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html.
It includes adaptations and additions from [Go Community Code of Conduct][golang-coc],
[CNCF Code of Conduct][cncf-coc], and [Microsoft Open Source Code of Conduct][microsoft-coc].
This Code of Conduct is licensed under the [Creative Commons Attribution 3.0 License][cc-by-3-us].
[contributor-covenant-home]: https://www.contributor-covenant.org (https://www.contributor-covenant.org/)
[golang-coc]: https://golang.org/conduct
[cncf-coc]: https://github.com/cncf/foundation/blob/master/code-of-conduct.md
[microsoft-coc]: https://opensource.microsoft.com/codeofconduct/
[cc-by-3-us]: https://creativecommons.org/licenses/by/3.0/us/
================================================
FILE: ts_forecasting_methods/CoST/LICENSE.txt
================================================
Copyright (c) 2022, Salesforce.com, Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of Salesforce.com nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
================================================
FILE: ts_forecasting_methods/CoST/README.md
================================================
# CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting
Official PyTorch code repository for the [CoST paper](https://openreview.net/forum?id=PilZY3omXV2).
## Data
The datasets can be obtained and put into `datasets/` folder in the following way:
* [3 ETT datasets](https://github.com/zhouhaoyi/ETDataset) should be placed at `datasets/ETTh1.csv`, `datasets/ETTh2.csv` and `datasets/ETTm1.csv`.
* [Electricity dataset](https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014) should be placed at `datasets/LD2011_2014.txt`; then run `electricity.py` to preprocess it.
## Usage
To train and evaluate CoST on a dataset, run the following command:
```train & evaluate
python train.py <dataset_name> <run_name> --archive <archive> --batch-size <batch_size> --repr-dims <repr_dims> --gpu <gpu> --eval
```
Detailed descriptions of the arguments are as follows:
| Parameter name | Description of parameter |
| --- | --- |
| dataset_name | The dataset name |
| run_name | The folder name used to save model, output and evaluation metrics. This can be set to any word |
| archive | The archive name that the dataset belongs to. This can be set to `forecast_csv` or `forecast_csv_univar` |
| batch_size | The batch size (defaults to 8) |
| repr_dims | The representation dimensions (defaults to 320) |
| gpu | The gpu no. used for training and inference (defaults to 0) |
| eval | Whether to perform evaluation after training |
| kernels | Kernel sizes for mixture of AR experts module |
| alpha | Weight for loss function |
(For descriptions of more arguments, run `python train.py -h`.)
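For example, an illustrative multivariate run on ETTh1 using only the arguments documented above: `python train.py ETTh1 forecast_multivar --archive forecast_csv --batch-size 8 --repr-dims 320 --gpu 0 --eval`.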
After training and evaluation, the trained encoder, output, and evaluation metrics can be found in `training/<DatasetName>__<RunName>_<Date>_<Time>/`.