[
  {
    "path": ".vscode/settings.json",
    "content": "{\n    \"python.pythonPath\": \"C:\\\\Files\\\\APPs\\\\RuanJian\\\\Miniconda3\\\\envs\\\\TF_GPU\\\\python.exe\"\n}"
  },
  {
    "path": "README.md",
    "content": "# [深度应用]·DC竞赛轴承故障检测开源Baseline（基于Keras1D卷积 val_acc:0.99780）\n\n> 个人网站--> [http://www.yansongsong.cn](http://www.yansongsong.cn/)\n> \n> Github项目地址--> [https://github.com/xiaosongshine/bearing_detection_by_conv1d](https://github.com/xiaosongshine/bearing_detection_by_conv1d)\n\n  \n\n## 大赛简介\n\n轴承是在机械设备中具有广泛应用的关键部件之一。由于过载，疲劳，磨损，腐蚀等原因，轴承在机器操作过程中容易损坏。事实上，超过50％的旋转机器故障与轴承故障有关。实际上，滚动轴承故障可能导致设备剧烈摇晃，设备停机，停止生产，甚至造成人员伤亡。一般来说，早期的轴承弱故障是复杂的，难以检测。因此，轴承状态的监测和分析非常重要，它可以发现轴承的早期弱故障，防止故障造成损失。 最近，轴承的故障检测和诊断一直备受关注。在所有类型的轴承故障诊断方法中，振动信号分析是最主要和有用的工具之一。 在这次比赛中，我们提供一个真实的轴承振动信号数据集，选手需要使用机器学习技术判断轴承的工作状态。\n\n[竞赛网站](http://www.pkbigdata.com/common/cmpt/%E8%BD%B4%E6%89%BF%E6%95%85%E9%9A%9C%E6%A3%80%E6%B5%8B%E8%AE%AD%E7%BB%83%E8%B5%9B_%E6%8E%92%E8%A1%8C%E6%A6%9C.html)\n\n  \n\n## 数据介绍\n\n轴承有3种故障：外圈故障，内圈故障，滚珠故障，外加正常的工作状态。如表1所示，结合轴承的3种直径（直径1,直径2,直径3），轴承的工作状态有10类：\n\n![](https://img-blog.csdnimg.cn/20190926141237674.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly94aWFvc29uZ3NoaW5lLmJsb2cuY3Nkbi5uZXQ=,size_16,color_FFFFFF,t_70)![](data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== \"点击并拖拽以移动\")​\n\n**参赛选手需要设计模型根据轴承运行中的振动信号对轴承的工作状态进行分类。**\n\n  \n\n1.train.csv，训练集数据，1到6000为按时间序列连续采样的振动信号数值，每行数据是一个样本，共792条数据，第一列id字段为样本编号，最后一列label字段为标签数据，即轴承的工作状态，用数字0到9表示。\n\n2.test_data.csv，测试集数据，共528条数据，除无label字段外，其他字段同训练集。 总的来说，每行数据除去id和label后是轴承一段时间的振动信号数据，选手需要用这些振动信号去判定轴承的工作状态label。\n\n注意：同一列的数据不一定是同一个时间点的采样数据，即不要把每一列当作一个特征\n\n  \n\n**[点击下载数据](http://mad-net.org:8765/explore.html?t=0.5831516555847212)**\n\n  \n\n***数据下载具体操作：**\n\n![](https://img-blog.csdnimg.cn/2019092614125199.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly94aWFvc29uZ3NoaW5lLmJsb2cuY3Nkbi5uZXQ=,size_16,color_FFFFFF,t_70)![](data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== \"点击并拖拽以移动\")​\n\n**ps：注册登陆后方可下载**\n\n  \n\n----------\n\n**评分标准**\n\n评分算法  
\nbinary-classification\n\n采用各个品类F1指标的算术平均值，它是Precision 和 Recall 的调和平均数。\n\n![](https://img-blog.csdnimg.cn/20190926141308214.png)![](data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== \"点击并拖拽以移动\")​\n\n其中，Pi是表示第i个种类对应的Precision， Ri是表示第i个种类对应Recall。\n\n  \n\n## 赛题分析\n\n简单分析一下，这个比赛大家可以简单的理解为一个10分类的问题，输入的形状为(-1,6000)，网络输出的结果为(-1,10)（此处采用onehot形式）\n\n赛题就是一个十分类预测问题，解题思路应该包括以下内容\n\n1.  数据读取与处理\n2.  网络模型搭建\n3.  模型的训练\n4.  模型应用与提交预测结果\n\n  \n\n## 实战应用\n\n经过对赛题的分析，我们把任务分成四个小任务，首先第一步是：\n\n### 1.数据读取与处理\n\n数据是CSV文件，1到6000为按时间序列连续采样的振动信号数值，每行数据是一个样本，共792条数据，第一列id字段为样本编号，最后一列label字段为标签数据，即轴承的工作状态，用数字0到9表示。\n\n**数据处理函数定义：**\n\n```python\n\nimport keras\nfrom scipy.io import loadmat\nimport matplotlib.pyplot as plt\nimport glob\nimport numpy as np\nimport pandas as pd\nimport math\nimport os\nfrom keras.layers import *\nfrom keras.models import *\nfrom keras.optimizers import *\nimport numpy as np\n\nMANIFEST_DIR = \"Bear_data/train.csv\"\nBatch_size = 20\nLong = 792\nLens = 640\n\n#把标签转成oneHot\ndef convert2oneHot(index,Lens):\n    hot = np.zeros((Lens,))\n    hot[int(index)] = 1\n    return(hot)\n\ndef xs_gen(path=MANIFEST_DIR,batch_size = Batch_size,train=True,Lens=Lens):\n\n    img_list = pd.read_csv(path)\n    if train:\n        img_list = np.array(img_list)[:Lens]\n        print(\"Found %s train items.\"%len(img_list))\n        print(\"list 1 is\",img_list[0,-1])\n        steps = math.ceil(len(img_list) / batch_size)    # 确定每轮有多少个batch\n    else:\n        img_list = np.array(img_list)[Lens:]\n        print(\"Found %s test items.\"%len(img_list))\n        print(\"list 1 is\",img_list[0,-1])\n        steps = math.ceil(len(img_list) / batch_size)    # 确定每轮有多少个batch\n    while True:\n        for i in range(steps):\n\n            batch_list = img_list[i * batch_size : i * batch_size + batch_size]\n            np.random.shuffle(batch_list)\n            batch_x = np.array([file for file in batch_list[:,1:-1]])\n            batch_y = 
np.array([convert2oneHot(label,10) for label in batch_list[:,-1]])\n\n            yield batch_x, batch_y\n\nTEST_MANIFEST_DIR = \"Bear_data/test_data.csv\"\n\ndef ts_gen(path=TEST_MANIFEST_DIR,batch_size = Batch_size):\n\n    img_list = pd.read_csv(path)\n\n    img_list = np.array(img_list)[:Lens]\n    print(\"Found %s train items.\"%len(img_list))\n    print(\"list 1 is\",img_list[0,-1])\n    steps = math.ceil(len(img_list) / batch_size)    # 确定每轮有多少个batch\n    while True:\n        for i in range(steps):\n\n            batch_list = img_list[i * batch_size : i * batch_size + batch_size]\n            #np.random.shuffle(batch_list)\n            batch_x = np.array([file for file in batch_list[:,1:]])\n            #batch_y = np.array([convert2oneHot(label,10) for label in batch_list[:,-1]])\n\n            yield batch_x\n```\n\n![](data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== \"点击并拖拽以移动\")\n\n**读取一条数据进行显示**\n\n```python\nif __name__ == \"__main__\":\n    if Show_one == True:\n        show_iter = xs_gen()\n        for x,y in show_iter:\n            x1 = x[0]\n            y1 = y[0]\n            break\n        print(y)\n        print(x1.shape)\n        plt.plot(x1)\n        plt.show()\n```\n\n![](data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== \"点击并拖拽以移动\")\n\n  \n\n![](https://img-blog.csdnimg.cn/20190411181731183.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3hpYW9zb25nc2hpbmU=,size_16,color_FFFFFF,t_70)![](data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== \"点击并拖拽以移动\")​\n\n我们由上述信息可以看出每种导联都是由6000个点组成的列表，大家可以理解为mnist展开为一维后的形状\n\n  \n\n**原始训练数据乱序操作**\n\n```python\ndef create_csv(TXT_DIR=MANIFEST_DIR):\n    lists = pd.read_csv(TXT_DIR,sep=r\"\\t\",header=None)\n    lists = lists.sample(frac=1)\n    lists.to_csv(MANIFEST_DIR,index=None)\n    print(\"Finish save 
csv\")\n```\n\n![](data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== \"点击并拖拽以移动\")\n\n  \n\n数据读取的方式我采用的是生成器的方式，这样可以按batch读取，加快训练速度，大家也可以采用一下全部读取，看个人的习惯了。关于生成器介绍，大家可以参考我的这篇博文\n\n[[开发技巧]·深度学习使用生成器加速数据读取与训练简明教程（TensorFlow，pytorch，keras）](https://blog.csdn.net/xiaosongshine/article/details/89213360)\n\n  \n\n### 2.网络模型搭建\n\n数据我们处理好了，后面就是模型的搭建了，我使用keras搭建的，操作简单便捷，tf，pytorch，sklearn大家可以按照自己喜好来。\n\n网络模型可以选择CNN，RNN，Attention结构，或者多模型的融合，抛砖引玉，此Baseline采用的一维CNN方式，[一维CNN学习地址](https://blog.csdn.net/xiaosongshine/article/details/88614450)\n\n**模型搭建**\n\n```python\nTIME_PERIODS = 6000\ndef build_model(input_shape=(TIME_PERIODS,),num_classes=10):\n    model = Sequential()\n    model.add(Reshape((TIME_PERIODS, 1), input_shape=input_shape))\n    model.add(Conv1D(16, 8,strides=2, activation='relu',input_shape=(TIME_PERIODS,1)))\n\n    model.add(Conv1D(16, 8,strides=2, activation='relu',padding=\"same\"))\n    model.add(MaxPooling1D(2))\n\n    model.add(Conv1D(64, 4,strides=2, activation='relu',padding=\"same\"))\n    model.add(Conv1D(64, 4,strides=2, activation='relu',padding=\"same\"))\n    model.add(MaxPooling1D(2))\n    model.add(Conv1D(256, 4,strides=2, activation='relu',padding=\"same\"))\n    model.add(Conv1D(256, 4,strides=2, activation='relu',padding=\"same\"))\n    model.add(MaxPooling1D(2))\n    model.add(Conv1D(512, 2,strides=1, activation='relu',padding=\"same\"))\n    model.add(Conv1D(512, 2,strides=1, activation='relu',padding=\"same\"))\n    model.add(MaxPooling1D(2))\n\n    model.add(GlobalAveragePooling1D())\n    model.add(Dropout(0.3))\n    model.add(Dense(num_classes, activation='softmax'))\n    return(model)\n\n```\n\n![](data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== \"点击并拖拽以移动\")\n\n**用model.summary()输出的网络模型为**\n\n```bash\n_________________________________________________________________\nLayer (type)                 Output Shape              Param 
#\n=================================================================\nreshape_1 (Reshape)          (None, 6000, 1)           0\n_________________________________________________________________\nconv1d_1 (Conv1D)            (None, 2997, 16)          144\n_________________________________________________________________\nconv1d_2 (Conv1D)            (None, 1499, 16)          2064\n_________________________________________________________________\nmax_pooling1d_1 (MaxPooling1 (None, 749, 16)           0\n_________________________________________________________________\nconv1d_3 (Conv1D)            (None, 375, 64)           4160\n_________________________________________________________________\nconv1d_4 (Conv1D)            (None, 188, 64)           16448\n_________________________________________________________________\nmax_pooling1d_2 (MaxPooling1 (None, 94, 64)            0\n_________________________________________________________________\nconv1d_5 (Conv1D)            (None, 47, 256)           65792\n_________________________________________________________________\nconv1d_6 (Conv1D)            (None, 24, 256)           262400\n_________________________________________________________________\nmax_pooling1d_3 (MaxPooling1 (None, 12, 256)           0\n_________________________________________________________________\nconv1d_7 (Conv1D)            (None, 12, 512)           262656\n_________________________________________________________________\nconv1d_8 (Conv1D)            (None, 12, 512)           524800\n_________________________________________________________________\nmax_pooling1d_4 (MaxPooling1 (None, 6, 512)            0\n_________________________________________________________________\nglobal_average_pooling1d_1 ( (None, 512)               0\n_________________________________________________________________\ndropout_1 (Dropout)          (None, 512)               0\n_________________________________________________________________\ndense_1 (Dense)         
     (None, 10)                5130\n=================================================================\nTotal params: 1,143,594\nTrainable params: 1,143,594\nNon-trainable params: 0\n_________________________________________________________________\nNone\n```\n\n![](data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== \"点击并拖拽以移动\")\n\n训练参数比较少，大家可以根据自己想法更改。\n\n### 3.网络模型训练\n\n**模型训练**\n\n```python\nShow_one = True\n\nTrain = True\n\nif __name__ == \"__main__\":\n    if Show_one == True:\n        show_iter = xs_gen()\n        for x,y in show_iter:\n            x1 = x[0]\n            y1 = y[0]\n            break\n        print(y)\n        print(x1.shape)\n        plt.plot(x1)\n        plt.show()\n\n\n    if Train == True:\n        train_iter = xs_gen()\n        val_iter = xs_gen(train=False)\n\n        ckpt = keras.callbacks.ModelCheckpoint(\n            filepath='best_model.{epoch:02d}-{val_loss:.4f}.h5',\n            monitor='val_loss', save_best_only=True,verbose=1)\n\n        model = build_model()\n        opt = Adam(0.0002)\n        model.compile(loss='categorical_crossentropy',\n                    optimizer=opt, metrics=['accuracy'])\n        print(model.summary())\n\n        model.fit_generator(\n            generator=train_iter,\n            steps_per_epoch=Lens//Batch_size,\n            epochs=50,\n            initial_epoch=0,\n            validation_data = val_iter,\n            nb_val_samples = (Long - Lens)//Batch_size,\n            callbacks=[ckpt],\n            )\n        model.save(\"finishModel.h5\")\n    else:\n        test_iter = ts_gen()\n        model = load_model(\"best_model.49-0.00.h5\")\n        pres = model.predict_generator(generator=test_iter,steps=math.ceil(528/Batch_size),verbose=1)\n        print(pres.shape)\n        ohpres = np.argmax(pres,axis=1)\n        print(ohpres.shape)\n        #img_list = pd.read_csv(TEST_MANIFEST_DIR)\n        df = pd.DataFrame()\n        df[\"id\"] = np.arange(1,len(ohpres)+1)\n   
     df[\"label\"] = ohpres\n        df.to_csv(\"submmit.csv\",index=None)\n\n\n```\n\n![](data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== \"点击并拖拽以移动\")\n\n**训练过程输出（最优结果：32/32 [==============================] - 1s 33ms/step - loss: 0.0098 - acc: 0.9969 - val_loss: 0.0172 - val_acc: 0.9924）**\n\n```bash\nEpoch 46/50\n32/32 [==============================] - 1s 33ms/step - loss: 0.0638 - acc: 0.9766 - val_loss: 0.2460 - val_acc: 0.9242\n\nEpoch 00046: val_loss did not improve from 0.00354\nEpoch 47/50\n32/32 [==============================] - 1s 33ms/step - loss: 0.0426 - acc: 0.9859 - val_loss: 0.0641 - val_acc: 0.9848\n\nEpoch 00047: val_loss did not improve from 0.00354\nEpoch 48/50\n32/32 [==============================] - 1s 33ms/step - loss: 0.0148 - acc: 0.9969 - val_loss: 0.0072 - val_acc: 1.0000\n\nEpoch 00048: val_loss did not improve from 0.00354\nEpoch 49/50\n32/32 [==============================] - 1s 34ms/step - loss: 0.0061 - acc: 0.9984 - val_loss: 0.0404 - val_acc: 0.9857\n\nEpoch 00049: val_loss did not improve from 0.00354\nEpoch 50/50\n32/32 [==============================] - 1s 33ms/step - loss: 0.0098 - acc: 0.9969 - val_loss: 0.0172 - val_acc: 0.9924\n```\n\n![](data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== \"点击并拖拽以移动\")\n\n###   \n\n最后是进行预测与提交，代码在上面大家可以自己运行一下。\n\n**预测结果**\n\n排行榜：第24名 f1score 0.99780\n\n![](https://img-blog.csdnimg.cn/20190411225147290.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3hpYW9zb25nc2hpbmU=,size_16,color_FFFFFF,t_70)![](data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== \"点击并拖拽以移动\")​\n\n  \n\n  \n\n  \n\n##   \n\n## **展望**\n\n此Baseline采用最简单的一维卷积达到了99.8%测试准确率，这体现了一维卷积在一维时序序列的应用效果。\n\nhope this helps\n\n> 个人网站--> [http://www.yansongsong.cn](http://www.yansongsong.cn/)\n> \n> 
项目github地址：[https://github.com/xiaosongshine/bearing_detection_by_conv1d](https://github.com/xiaosongshine/bearing_detection_by_conv1d)\n\n**欢迎Fork+Star，觉得有用的话，麻烦小小鼓励一下 ><**\n"
  },
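The macro-averaged F1 metric described in the README above can be sketched in plain Python. This is a minimal illustration of the formula (mean over classes of the harmonic mean of per-class Precision Pi and Recall Ri), not the competition's official scorer; the function and variable names are my own:

```python
def macro_f1(y_true, y_pred, num_classes=10):
    """Arithmetic mean of per-class F1, where each F1 is the
    harmonic mean of that class's precision and recall."""
    f1s = []
    for c in range(num_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / num_classes

print(macro_f1([0, 1, 2], [0, 1, 2], num_classes=3))  # 1.0 on perfect predictions
```

In practice `sklearn.metrics.f1_score(y_true, y_pred, average="macro")` computes the same quantity.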
  {
    "path": "main.py",
    "content": "\nimport keras\nfrom scipy.io import loadmat\nimport matplotlib.pyplot as plt\nimport glob\nimport numpy as np\nimport pandas as pd\nimport math\nimport os\nfrom keras.layers import *\nfrom keras.models import *\nfrom keras.optimizers import *\nimport numpy as np\n\nMANIFEST_DIR = \"Bear_data/train.csv\"\nBatch_size = 20\nLong = 792\nLens = 640\n#把标签转成oneHot\ndef convert2oneHot(index,Lens):\n    hot = np.zeros((Lens,))\n    hot[int(index)] = 1\n    return(hot)\n\ndef xs_gen(path=MANIFEST_DIR,batch_size = Batch_size,train=True,Lens=Lens):\n\n    img_list = pd.read_csv(path)\n    if train:\n        img_list = np.array(img_list)[:Lens]\n        print(\"Found %s train items.\"%len(img_list))\n        print(\"list 1 is\",img_list[0,-1])\n        steps = math.ceil(len(img_list) / batch_size)    # 确定每轮有多少个batch\n    else:\n        img_list = np.array(img_list)[Lens:]\n        print(\"Found %s test items.\"%len(img_list))\n        print(\"list 1 is\",img_list[0,-1])\n        steps = math.ceil(len(img_list) / batch_size)    # 确定每轮有多少个batch\n    while True:\n        for i in range(steps):\n\n            batch_list = img_list[i * batch_size : i * batch_size + batch_size]\n            np.random.shuffle(batch_list)\n            batch_x = np.array([file for file in batch_list[:,1:-1]])\n            batch_y = np.array([convert2oneHot(label,10) for label in batch_list[:,-1]])\n\n            yield batch_x, batch_y\n\nTEST_MANIFEST_DIR = \"Bear_data/test_data.csv\"\n\ndef ts_gen(path=TEST_MANIFEST_DIR,batch_size = Batch_size):\n\n    img_list = pd.read_csv(path)\n\n    img_list = np.array(img_list)[:Lens]\n    print(\"Found %s train items.\"%len(img_list))\n    print(\"list 1 is\",img_list[0,-1])\n    steps = math.ceil(len(img_list) / batch_size)    # 确定每轮有多少个batch\n    while True:\n        for i in range(steps):\n\n            batch_list = img_list[i * batch_size : i * batch_size + batch_size]\n            #np.random.shuffle(batch_list)\n            batch_x = 
np.array([file for file in batch_list[:,1:]])\n            #batch_y = np.array([convert2oneHot(label,10) for label in batch_list[:,-1]])\n\n            yield batch_x\n\n\n\nTIME_PERIODS = 6000\ndef build_model(input_shape=(TIME_PERIODS,),num_classes=10):\n    model = Sequential()\n    model.add(Reshape((TIME_PERIODS, 1), input_shape=input_shape))\n    model.add(Conv1D(16, 8,strides=2, activation='relu',input_shape=(TIME_PERIODS,1)))\n\n    model.add(Conv1D(16, 8,strides=2, activation='relu',padding=\"same\"))\n    model.add(MaxPooling1D(2))\n\n    model.add(Conv1D(64, 4,strides=2, activation='relu',padding=\"same\"))\n    model.add(Conv1D(64, 4,strides=2, activation='relu',padding=\"same\"))\n    model.add(MaxPooling1D(2))\n    model.add(Conv1D(256, 4,strides=2, activation='relu',padding=\"same\"))\n    model.add(Conv1D(256, 4,strides=2, activation='relu',padding=\"same\"))\n    model.add(MaxPooling1D(2))\n    model.add(Conv1D(512, 2,strides=1, activation='relu',padding=\"same\"))\n    model.add(Conv1D(512, 2,strides=1, activation='relu',padding=\"same\"))\n    model.add(MaxPooling1D(2))\n    \"\"\"model.add(Flatten())\n    model.add(Dropout(0.3))\n    model.add(Dense(256, activation='relu'))\"\"\"\n    model.add(GlobalAveragePooling1D())\n    model.add(Dropout(0.3))\n    model.add(Dense(num_classes, activation='softmax'))\n    return(model)\n\nTrain = True\n\nif __name__ == \"__main__\":\n    if Train == True:\n        train_iter = xs_gen()\n        val_iter = xs_gen(train=False)\n\n        ckpt = keras.callbacks.ModelCheckpoint(\n            filepath='best_model.{epoch:02d}-{val_loss:.4f}.h5',\n            monitor='val_loss', save_best_only=True,verbose=1)\n\n        model = build_model()\n        opt = Adam(0.0002)\n        model.compile(loss='categorical_crossentropy',\n                    optimizer=opt, metrics=['accuracy'])\n        print(model.summary())\n\n        model.fit_generator(\n            generator=train_iter,\n            
steps_per_epoch=Lens//Batch_size,\n            epochs=50,\n            initial_epoch=0,\n            validation_data = val_iter,\n            nb_val_samples = (Long - Lens)//Batch_size,\n            callbacks=[ckpt],\n            )\n        model.save(\"finishModel.h5\")\n    else:\n        test_iter = ts_gen()\n        model = load_model(\"best_model.49-0.00.h5\")\n        pres = model.predict_generator(generator=test_iter,steps=math.ceil(528/Batch_size),verbose=1)\n        print(pres.shape)\n        ohpres = np.argmax(pres,axis=1)\n        print(ohpres.shape)\n        #img_list = pd.read_csv(TEST_MANIFEST_DIR)\n        df = pd.DataFrame()\n        df[\"id\"] = np.arange(1,len(ohpres)+1)\n        df[\"label\"] = ohpres\n        df.to_csv(\"submmit.csv\",index=None)\n        test_iter = ts_gen()\n        for x in test_iter:\n            x1 = x[0]\n            break\n        plt.plot(x1)\n        plt.show()\n\n"
  },
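The output lengths in the model summary above follow Keras's 1-D shape rules: a "valid" convolution gives floor((L - k)/s) + 1, a "same" convolution gives ceil(L/s), and MaxPooling1D of size 2 gives floor(L/2). A small sketch (helper names are my own) traces the 6000-point input through the layer stack of `build_model`:

```python
import math

def conv_out(length, kernel, stride, padding):
    # Keras Conv1D output length for "same" vs "valid" padding
    if padding == "same":
        return math.ceil(length / stride)
    return (length - kernel) // stride + 1   # "valid"

def pool_out(length, size=2):
    # MaxPooling1D with default stride == pool size
    return length // size

L = 6000
L = conv_out(L, 8, 2, "valid")  # conv1d_1 -> 2997
L = conv_out(L, 8, 2, "same")   # conv1d_2 -> 1499
L = pool_out(L)                 # max_pooling1d_1 -> 749
L = conv_out(L, 4, 2, "same")   # conv1d_3 -> 375
L = conv_out(L, 4, 2, "same")   # conv1d_4 -> 188
L = pool_out(L)                 # max_pooling1d_2 -> 94
L = conv_out(L, 4, 2, "same")   # conv1d_5 -> 47
L = conv_out(L, 4, 2, "same")   # conv1d_6 -> 24
L = pool_out(L)                 # max_pooling1d_3 -> 12
L = conv_out(L, 2, 1, "same")   # conv1d_7 -> 12
L = conv_out(L, 2, 1, "same")   # conv1d_8 -> 12
L = pool_out(L)                 # max_pooling1d_4 -> 6
print(L)  # 6, matching the (None, 6, 512) row before GlobalAveragePooling1D
```

Working through this arithmetic is a quick way to check that a stack of strided layers does not shrink the sequence below the next layer's kernel size.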
  {
    "path": "submmit.csv",
    "content": "id,label\n1,9\n2,7\n3,9\n4,0\n5,1\n6,7\n7,4\n8,7\n9,0\n10,3\n11,9\n12,0\n13,7\n14,0\n15,0\n16,7\n17,8\n18,9\n19,7\n20,0\n21,7\n22,2\n23,7\n24,5\n25,8\n26,9\n27,4\n28,0\n29,1\n30,8\n31,3\n32,7\n33,1\n34,0\n35,9\n36,0\n37,1\n38,4\n39,6\n40,9\n41,9\n42,3\n43,0\n44,7\n45,9\n46,8\n47,5\n48,9\n49,7\n50,8\n51,1\n52,7\n53,0\n54,7\n55,0\n56,2\n57,9\n58,9\n59,2\n60,5\n61,5\n62,7\n63,3\n64,6\n65,8\n66,6\n67,6\n68,9\n69,1\n70,5\n71,9\n72,1\n73,0\n74,3\n75,7\n76,9\n77,8\n78,0\n79,0\n80,8\n81,7\n82,2\n83,0\n84,0\n85,1\n86,1\n87,7\n88,4\n89,9\n90,0\n91,7\n92,0\n93,6\n94,7\n95,5\n96,8\n97,0\n98,7\n99,9\n100,0\n101,0\n102,9\n103,6\n104,3\n105,9\n106,3\n107,5\n108,0\n109,2\n110,7\n111,8\n112,7\n113,0\n114,9\n115,4\n116,9\n117,9\n118,1\n119,0\n120,9\n121,9\n122,2\n123,7\n124,7\n125,7\n126,2\n127,2\n128,7\n129,0\n130,4\n131,2\n132,3\n133,9\n134,7\n135,3\n136,8\n137,0\n138,0\n139,3\n140,0\n141,7\n142,5\n143,4\n144,9\n145,5\n146,0\n147,9\n148,7\n149,2\n150,4\n151,2\n152,9\n153,7\n154,0\n155,7\n156,2\n157,7\n158,0\n159,4\n160,2\n161,1\n162,0\n163,0\n164,2\n165,0\n166,7\n167,7\n168,6\n169,5\n170,1\n171,9\n172,0\n173,1\n174,4\n175,0\n176,7\n177,3\n178,9\n179,4\n180,8\n181,7\n182,9\n183,1\n184,2\n185,7\n186,9\n187,8\n188,0\n189,9\n190,9\n191,4\n192,7\n193,0\n194,7\n195,7\n196,5\n197,0\n198,7\n199,9\n200,1\n201,9\n202,0\n203,7\n204,6\n205,7\n206,7\n207,5\n208,0\n209,7\n210,8\n211,9\n212,4\n213,0\n214,9\n215,0\n216,9\n217,4\n218,1\n219,0\n220,9\n221,9\n222,4\n223,0\n224,7\n225,8\n226,9\n227,6\n228,9\n229,1\n230,3\n231,7\n232,3\n233,9\n234,0\n235,1\n236,3\n237,0\n238,7\n239,7\n240,0\n241,0\n242,1\n243,5\n244,8\n245,8\n246,9\n247,0\n248,2\n249,9\n250,2\n251,0\n252,6\n253,0\n254,9\n255,7\n256,6\n257,7\n258,0\n259,7\n260,7\n261,4\n262,0\n263,7\n264,0\n265,3\n266,9\n267,3\n268,9\n269,1\n270,2\n271,9\n272,7\n273,7\n274,6\n275,0\n276,9\n277,2\n278,5\n279,2\n280,7\n281,0\n282,5\n283,9\n284,7\n285,9\n286,9\n287,9\n288,3\n289,9\n290,7\n291,0\n292,2\n293,6\n294,4\n295,0\n296,0\n297,9\n298
,9\n299,7\n300,7\n301,7\n302,4\n303,1\n304,3\n305,0\n306,8\n307,0\n308,7\n309,7\n310,8\n311,0\n312,7\n313,2\n314,7\n315,5\n316,9\n317,8\n318,7\n319,6\n320,4\n321,2\n322,8\n323,0\n324,6\n325,7\n326,9\n327,6\n328,6\n329,9\n330,0\n331,0\n332,8\n333,0\n334,7\n335,7\n336,0\n337,0\n338,0\n339,9\n340,0\n341,0\n342,9\n343,2\n344,7\n345,6\n346,3\n347,1\n348,4\n349,1\n350,7\n351,5\n352,0\n353,6\n354,2\n355,3\n356,9\n357,5\n358,6\n359,2\n360,1\n361,2\n362,9\n363,7\n364,7\n365,9\n366,7\n367,7\n368,7\n369,5\n370,9\n371,2\n372,9\n373,7\n374,9\n375,2\n376,0\n377,5\n378,2\n379,6\n380,3\n381,0\n382,9\n383,4\n384,7\n385,5\n386,3\n387,5\n388,8\n389,9\n390,6\n391,0\n392,8\n393,5\n394,7\n395,0\n396,1\n397,4\n398,5\n399,8\n400,9\n401,9\n402,8\n403,3\n404,5\n405,7\n406,0\n407,0\n408,3\n409,9\n410,8\n411,0\n412,9\n413,6\n414,0\n415,1\n416,0\n417,7\n418,4\n419,8\n420,0\n421,9\n422,0\n423,0\n424,1\n425,0\n426,5\n427,9\n428,7\n429,3\n430,8\n431,0\n432,9\n433,7\n434,1\n435,3\n436,9\n437,9\n438,7\n439,6\n440,9\n441,9\n442,7\n443,9\n444,7\n445,5\n446,0\n447,7\n448,6\n449,8\n450,9\n451,5\n452,0\n453,0\n454,8\n455,7\n456,0\n457,1\n458,8\n459,9\n460,2\n461,8\n462,0\n463,0\n464,1\n465,0\n466,9\n467,4\n468,4\n469,6\n470,7\n471,0\n472,1\n473,4\n474,4\n475,0\n476,5\n477,9\n478,3\n479,7\n480,9\n481,9\n482,7\n483,9\n484,9\n485,0\n486,5\n487,2\n488,7\n489,7\n490,7\n491,0\n492,5\n493,2\n494,7\n495,9\n496,7\n497,5\n498,5\n499,2\n500,7\n501,9\n502,2\n503,9\n504,8\n505,7\n506,1\n507,0\n508,8\n509,7\n510,4\n511,9\n512,6\n513,9\n514,2\n515,0\n516,3\n517,9\n518,1\n519,4\n520,0\n521,7\n522,1\n523,9\n524,9\n525,3\n526,6\n527,2\n528,0\n"
  }
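Before uploading a file like submmit.csv, the submission format can be sanity-checked: 528 rows, ids running 1..528 in order, labels in 0-9. A hedged sketch (helper name and the in-memory example are my own, shown here with a 3-row sample rather than the real file):

```python
import csv
import io

def check_submission(csv_text, n_rows=528, num_classes=10):
    """Validate an id,label submission: row count, id order, label range."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    assert len(rows) == n_rows, "wrong number of rows"
    for i, row in enumerate(rows, start=1):
        assert int(row["id"]) == i, "ids must run 1..n in order"
        assert 0 <= int(row["label"]) < num_classes, "label out of range"
    return True

sample = "id,label\n1,9\n2,7\n3,0\n"
print(check_submission(sample, n_rows=3))  # True
```

For the real file, `check_submission(open("submmit.csv").read())` would apply the same checks with the defaults.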
]