[
  {
    "path": ".gitattributes",
    "content": "*.pxd\t\ttext diff=python\n*.py\t\ttext diff=python\n*.py3\t\ttext diff=python\n*.pyw\t\ttext diff=python\n*.pyx\t\ttext diff=python\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "content": "---\nname: Bug report\nabout: Create a report to help us improve\n\n---\n\n**Describe the bug**\nA clear and concise description of what the bug is.\n\n**To Reproduce**\nSteps to reproduce the behavior:\n1. Go to '...'\n2. Click on '....'\n3. Scroll down to '....'\n4. See error\n\n**Expected behavior**\nA clear and concise description of what you expected to happen.\n\n**Screenshots**\nIf applicable, add screenshots to help explain your problem.\n\n**Desktop (please complete the following information):**\n - OS: [e.g. iOS]\n - Browser [e.g. chrome, safari]\n - Version [e.g. 22]\n\n**Smartphone (please complete the following information):**\n - Device: [e.g. iPhone6]\n - OS: [e.g. iOS8.1]\n - Browser [e.g. stock browser, safari]\n - Version [e.g. 22]\n\n**Additional context**\nAdd any other context about the problem here.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "content": "---\nname: Feature request\nabout: Suggest an idea for this project\n\n---\n\n**Is your feature request related to a problem? Please describe.**\nA clear and concise description of what the problem is. Ex. I'm always frustrated when [...]\n\n**Describe the solution you'd like**\nA clear and concise description of what you want to happen.\n\n**Describe alternatives you've considered**\nA clear and concise description of any alternative solutions or features you've considered.\n\n**Additional context**\nAdd any other context or screenshots about the feature request here.\n"
  },
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nIn the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.\n\n## Our Standards\n\nExamples of behavior that contributes to creating a positive environment include:\n\n* Using welcoming and inclusive language\n* Being respectful of differing viewpoints and experiences\n* Gracefully accepting constructive criticism\n* Focusing on what is best for the community\n* Showing empathy towards other community members\n\nExamples of unacceptable behavior by participants include:\n\n* The use of sexualized language or imagery and unwelcome sexual attention or advances\n* Trolling, insulting/derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or electronic address, without explicit permission\n* Other conduct which could reasonably be considered inappropriate in a professional setting\n\n## Our Responsibilities\n\nProject maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.\n\nProject maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.\n\n## Scope\n\nThis Code of Conduct applies both within project spaces and in public spaces when an individual is representing 
the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at avikjain02@gmail.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.\n\nProject maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.\n\n## Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]\n\n[homepage]: http://contributor-covenant.org\n[version]: http://contributor-covenant.org/version/1/4/\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "## Contributing\nWhen contributing to this repository, please first discuss the change you wish to make via issue, email, or any other method with the owners of this repository before making a change.\n\nPlease note we have a code of conduct; please follow it in all your interactions with the project.\n"
  },
  {
    "path": "Code/Day 11 K-NN.md",
    "content": "# K-Nearest Neighbors (K-NN)\n\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%207.jpg\">\n</p>\n\n## The DataSet | Social Network \n\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Other%20Docs/data.PNG\">\n</p> \n\n\n## Importing the libraries\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n```\n\n## Importing the dataset\n```python\ndataset = pd.read_csv('Social_Network_Ads.csv')\nX = dataset.iloc[:, [2, 3]].values\ny = dataset.iloc[:, 4].values\n```\n\n## Splitting the dataset into the Training set and Test set\n```python\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)\n```\n## Feature Scaling\n```python\nfrom sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)\n```\n## Fitting K-NN to the Training set\n```python\nfrom sklearn.neighbors import KNeighborsClassifier\nclassifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)\nclassifier.fit(X_train, y_train)\n```\n## Predicting the Test set results\n```python\ny_pred = classifier.predict(X_test)\n```\n\n## Making the Confusion Matrix\n```python\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_pred)\n```\n"
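The confusion matrix built above summarizes the classifier's hits and misses, and overall accuracy is simply its trace divided by the total count. A minimal sketch with a hypothetical confusion matrix (the numbers are illustrative, not results from this dataset):

```python
import numpy as np

# Hypothetical 2x2 confusion matrix: rows = actual class, columns = predicted class
cm = np.array([[64, 4],
               [3, 29]])

correct = np.trace(cm)      # diagonal = correctly classified samples: 64 + 29 = 93
total = cm.sum()            # all test samples: 100
accuracy = correct / total  # 0.93
```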
  },
  {
    "path": "Code/Day 13 SVM.md",
    "content": "# Day 13 | Support Vector Machine (SVM)\n\n## Importing the libraries\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n```\n\n## Importing the dataset\n```python\ndataset = pd.read_csv('Social_Network_Ads.csv')\nX = dataset.iloc[:, [2, 3]].values\ny = dataset.iloc[:, 4].values\n```\n\n## Splitting the dataset into the Training set and Test set\n```python\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)\n```\n\n## Feature Scaling\n```python\nfrom sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)\n```\n\n## Fitting SVM to the Training set\n```python\nfrom sklearn.svm import SVC\nclassifier = SVC(kernel = 'linear', random_state = 0)\nclassifier.fit(X_train, y_train)\n```\n## Predicting the Test set results\n```python\ny_pred = classifier.predict(X_test)\n```\n\n## Making the Confusion Matrix\n```python\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_pred)\n```\n\n## Visualising the Training set results\n\n```python\nfrom matplotlib.colors import ListedColormap\nX_set, y_set = X_train, y_train\nX1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),\n                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))\nplt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),\n             alpha = 0.75, cmap = ListedColormap(('red', 'green')))\nplt.xlim(X1.min(), X1.max())\nplt.ylim(X2.min(), X2.max())\nfor i, j in enumerate(np.unique(y_set)):\n    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],\n                c = ListedColormap(('red', 'green'))(i), label = j)\nplt.title('SVM (Training set)')\nplt.xlabel('Age')\nplt.ylabel('Estimated Salary')\nplt.legend()\nplt.show()\n```\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Other%20Docs/ets.png\">\n</p>\n\n## Visualising the Test set results\n```python\nfrom matplotlib.colors import ListedColormap\nX_set, y_set = X_test, y_test\nX1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),\n                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))\nplt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),\n             alpha = 0.75, cmap = ListedColormap(('red', 'green')))\nplt.xlim(X1.min(), X1.max())\nplt.ylim(X2.min(), X2.max())\nfor i, j in enumerate(np.unique(y_set)):\n    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],\n                c = ListedColormap(('red', 'green'))(i), label = j)\nplt.title('SVM (Test set)')\nplt.xlabel('Age')\nplt.ylabel('Estimated Salary')\nplt.legend()\nplt.show()\n```\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Other%20Docs/test.png\">\n</p>\n"
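A linear SVM like the one fitted above classifies a point by the sign of w·x + b and chooses w, b to maximize the margin 2/||w||. A small sketch of that geometry with made-up weights (w, b, and x below are illustrative placeholders, not fitted values):

```python
import numpy as np

# Hypothetical weights of a fitted linear SVM (illustrative values only)
w = np.array([3.0, 4.0])
b = -1.0

margin = 2 / np.linalg.norm(w)   # margin width: 2 / 5 = 0.4

x = np.array([1.0, 1.0])         # one sample to classify
decision = w @ x + b             # decision value: 3 + 4 - 1 = 6
label = int(decision >= 0)       # positive side of the hyperplane -> class 1
```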
  },
  {
    "path": "Code/Day 1_Data PreProcessing.md",
    "content": "# Data PreProcessing\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%201.jpg\">\n</p>\n\nAs shown in the infograph, we will break down data preprocessing into 6 essential steps.\nGet the dataset used in this example from [here](https://github.com/Avik-Jain/100-Days-Of-ML-Code/tree/master/datasets).\n\n## Step 1: Importing the libraries\n```python\nimport numpy as np\nimport pandas as pd\n```\n## Step 2: Importing dataset\n```python\ndataset = pd.read_csv('Data.csv')\nX = dataset.iloc[ : , :-1].values\nY = dataset.iloc[ : , 3].values\n```\n## Step 3: Handling the missing data\n```python\nfrom sklearn.impute import SimpleImputer\nimputer = SimpleImputer(missing_values = np.nan, strategy = \"mean\")\nimputer = imputer.fit(X[ : , 1:3])\nX[ : , 1:3] = imputer.transform(X[ : , 1:3])\n```\n## Step 4: Encoding categorical data\n```python\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder\nlabelencoder_X = LabelEncoder()\nX[ : , 0] = labelencoder_X.fit_transform(X[ : , 0])\n```\n### Creating a dummy variable\n```python\nfrom sklearn.compose import ColumnTransformer\nonehotencoder = ColumnTransformer([(\"onehot\", OneHotEncoder(), [0])], remainder = \"passthrough\")\nX = onehotencoder.fit_transform(X)\nlabelencoder_Y = LabelEncoder()\nY = labelencoder_Y.fit_transform(Y)\n```\n## Step 5: Splitting the dataset into training and test sets\n```python\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, Y_train, Y_test = train_test_split( X , Y , test_size = 0.2, random_state = 0)\n```\n\n## Step 6: Feature Scaling\n```python\nfrom sklearn.preprocessing import StandardScaler\nsc_X = StandardScaler()\nX_train = sc_X.fit_transform(X_train)\nX_test = sc_X.transform(X_test)\n```\n### Done :smile:\n"
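What the mean-imputation step does can be sketched by hand with NumPy: each missing entry is replaced by the mean of its column, computed over the non-missing values. A toy example with made-up numbers:

```python
import numpy as np

# Toy feature matrix with two missing entries
X = np.array([[1.0, 2.0],
              [np.nan, 6.0],
              [7.0, np.nan]])

col_means = np.nanmean(X, axis=0)   # column means ignoring NaN: [4.0, 4.0]
rows, cols = np.where(np.isnan(X))  # positions of the missing entries
X[rows, cols] = col_means[cols]     # fill each NaN with its column's mean
```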
  },
  {
    "path": "Code/Day 25 Decision Tree.md",
    "content": "# Decision Tree Classification\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%2023.jpg\">\n</p>\n\n### Importing the libraries\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n```\n\n### Importing the dataset\n```python\ndataset = pd.read_csv('Social_Network_Ads.csv')\nX = dataset.iloc[:, [2, 3]].values\ny = dataset.iloc[:, 4].values\n```\n### Splitting the dataset into the Training set and Test set\n```python\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)\n```\n\n### Feature Scaling\n```python\nfrom sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)\n```\n### Fitting Decision Tree Classification to the Training set\n```python\nfrom sklearn.tree import DecisionTreeClassifier\nclassifier = DecisionTreeClassifier(criterion = 'entropy', random_state = 0)\nclassifier.fit(X_train, y_train)\n```\n### Predicting the Test set results\n```python\ny_pred = classifier.predict(X_test)\n```\n### Making the Confusion Matrix\n```python\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_pred)\n```\n### Visualising the Training set results\n```python\nfrom matplotlib.colors import ListedColormap\nX_set, y_set = X_train, y_train\nX1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),\n                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))\nplt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),\n             alpha = 0.75, cmap = ListedColormap(('red', 'green')))\nplt.xlim(X1.min(), X1.max())\nplt.ylim(X2.min(), X2.max())\nfor i, j in enumerate(np.unique(y_set)):\n    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],\n                c = ListedColormap(('red', 'green'))(i), label = j)\nplt.title('Decision Tree Classification (Training set)')\nplt.xlabel('Age')\nplt.ylabel('Estimated Salary')\nplt.legend()\nplt.show()\n```\n### Visualising the Test set results\n```python\nfrom matplotlib.colors import ListedColormap\nX_set, y_set = X_test, y_test\nX1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),\n                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))\nplt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),\n             alpha = 0.75, cmap = ListedColormap(('red', 'green')))\nplt.xlim(X1.min(), X1.max())\nplt.ylim(X2.min(), X2.max())\nfor i, j in enumerate(np.unique(y_set)):\n    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],\n                c = ListedColormap(('red', 'green'))(i), label = j)\nplt.title('Decision Tree Classification (Test set)')\nplt.xlabel('Age')\nplt.ylabel('Estimated Salary')\nplt.legend()\nplt.show()\n```\n"
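The `criterion = 'entropy'` setting above scores candidate splits by label impurity, H = -Σ p log2 p, computed over the class proportions at a node. A minimal sketch of that computation:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (base 2) of a list of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()          # class proportions at the node
    return float(-(p * np.log2(p)).sum())

pure = entropy([1, 1, 1, 1])   # 0.0: a pure node has no impurity
mixed = entropy([0, 0, 1, 1])  # 1.0: a 50/50 split is maximally impure
```

The tree greedily picks the split that reduces this quantity the most (the information gain).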
  },
  {
    "path": "Code/Day 34 Random_Forest.md",
    "content": "# Random Forests\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%2033.jpg\">\n</p>\n\n\n### Importing the libraries\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n```\n\n### Importing the dataset\n```python\ndataset = pd.read_csv('Social_Network_Ads.csv')\nX = dataset.iloc[:, [2, 3]].values\ny = dataset.iloc[:, 4].values\n```\n### Splitting the dataset into the Training set and Test set\n```python\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)\n```\n\n### Feature Scaling\n```python\nfrom sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)\n```\n### Fitting Random Forest to the Training set\n```python\nfrom sklearn.ensemble import RandomForestClassifier\nclassifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)\nclassifier.fit(X_train, y_train)\n```\n### Predicting the Test set results\n```python\ny_pred = classifier.predict(X_test)\n```\n### Making the Confusion Matrix\n```python\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_pred)\n```\n### Visualising the Training set results\n```python\nfrom matplotlib.colors import ListedColormap\nX_set, y_set = X_train, y_train\nX1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),\n                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))\nplt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),\n             alpha = 0.75, cmap = ListedColormap(('red', 'green')))\nplt.xlim(X1.min(), X1.max())\nplt.ylim(X2.min(), X2.max())\nfor i, j in enumerate(np.unique(y_set)):\n    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],\n                c = ListedColormap(('red', 'green'))(i), label = j)\nplt.title('Random Forest Classification (Training set)')\nplt.xlabel('Age')\nplt.ylabel('Estimated Salary')\nplt.legend()\nplt.show()\n```\n### Visualising the Test set results\n```python\nfrom matplotlib.colors import ListedColormap\nX_set, y_set = X_test, y_test\nX1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),\n                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))\nplt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),\n             alpha = 0.75, cmap = ListedColormap(('red', 'green')))\nplt.xlim(X1.min(), X1.max())\nplt.ylim(X2.min(), X2.max())\nfor i, j in enumerate(np.unique(y_set)):\n    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],\n                c = ListedColormap(('red', 'green'))(i), label = j)\nplt.title('Random Forest Classification (Test set)')\nplt.xlabel('Age')\nplt.ylabel('Estimated Salary')\nplt.legend()\nplt.show()\n```\n"
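The `n_estimators = 10` forest above predicts by majority vote over its trees. A toy sketch of the voting step, with made-up per-tree predictions (the vote matrix is illustrative, not output from this dataset):

```python
import numpy as np

# Hypothetical predictions: each row is one tree's votes for 4 samples
tree_votes = np.array([[1, 0, 1, 0],
                       [1, 0, 0, 0],
                       [0, 0, 1, 1]])

# Majority vote per sample (column): class 1 wins when at least
# half the trees vote for it
forest_pred = (tree_votes.mean(axis=0) >= 0.5).astype(int)
```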
  },
  {
    "path": "Code/Day 6 Logistic Regression.md",
    "content": "# Logistic Regression\n\n\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%204.jpg\">\n</p>\n\n## The DataSet | Social Network \n\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Other%20Docs/data.PNG\">\n</p> \n\nThis dataset contains information about users in a social network: the user id, gender, age, and estimated salary. A car company has just launched its brand new luxury SUV, and we are trying to see which users of the social network are going to buy it. The last column tells us whether or not the user bought the SUV. We will build a model that predicts whether a user buys the SUV based on two variables: age and estimated salary. So our matrix of features consists of only these two columns.\nWe want to find correlations between a user's age and estimated salary and their decision to purchase the SUV.\n\n## Step 1 | Data Pre-Processing\n\n### Importing the Libraries\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n```\n### Importing the dataset\n\nGet the dataset from [here](https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/datasets/Social_Network_Ads.csv)\n```python\ndataset = pd.read_csv('Social_Network_Ads.csv')\nX = dataset.iloc[:, [2, 3]].values\ny = dataset.iloc[:, 4].values\n```\n\n### Splitting the dataset into the Training set and Test set\n\n```python\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)\n```\n\n### Feature Scaling\n\n```python\nfrom sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)\n```\n## Step 2 | Logistic Regression Model\n\nThe class we need lives in scikit-learn's linear model library. It is called linear because logistic regression is a linear classifier: since we are working in two dimensions here, our two categories of users will be separated by a straight line. We import the logistic regression class, then create a new object of this class, which will be the classifier we fit on our training set.\n\n### Fitting Logistic Regression to the Training set\n\n```python\nfrom sklearn.linear_model import LogisticRegression\nclassifier = LogisticRegression()\nclassifier.fit(X_train, y_train)\n```\n## Step 3 | Prediction\n\n### Predicting the Test set results\n\n```python\ny_pred = classifier.predict(X_test)\n```\n\n## Step 4 | Evaluating the Prediction\n\nWe predicted the test results, and now we will evaluate whether our logistic regression model learned correctly. The confusion matrix will contain the correct predictions that our model made on the test set as well as the incorrect ones.\n\n### Making the Confusion Matrix\n\n```python\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_pred)\n```\n\n## Visualization\n\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Other%20Docs/training.png\">\n</p> \n\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Other%20Docs/testing.png\">\n</p> \n"
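Under the hood, logistic regression passes a linear score through the sigmoid to get a probability, then thresholds at 0.5. A sketch of that decision rule with illustrative (not fitted) coefficients; the values of w, b, and x are made up:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical coefficients for the two scaled features (age, salary)
w = np.array([1.5, -0.5])
b = 0.2

x = np.array([0.8, 1.1])    # one scaled observation
p = sigmoid(x @ w + b)      # probability of class 1 (buys the SUV)
prediction = int(p >= 0.5)  # threshold the probability at 0.5
```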
  },
  {
    "path": "Code/Day2_Simple_Linear_Regression.md",
    "content": "# Simple Linear Regression\n\n\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%202.jpg\">\n</p>\n\n\n# Step 1: Data Preprocessing\n```python\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndataset = pd.read_csv('studentscores.csv')\nX = dataset.iloc[ : ,   : 1 ].values\nY = dataset.iloc[ : , 1 ].values\n\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, Y_train, Y_test = train_test_split( X, Y, test_size = 1/4, random_state = 0)\n```\n\n# Step 2: Fitting Simple Linear Regression Model to the training set\n```python\nfrom sklearn.linear_model import LinearRegression\nregressor = LinearRegression()\nregressor.fit(X_train, Y_train)\n```\n\n# Step 3: Predicting the Result\n```python\nY_pred = regressor.predict(X_test)\n```\n\n# Step 4: Visualization\n## Visualising the Training results\n```python\nplt.scatter(X_train , Y_train, color = 'red')\nplt.plot(X_train , regressor.predict(X_train), color = 'blue')\nplt.show()\n```\n## Visualising the test results\n```python\nplt.scatter(X_test , Y_test, color = 'red')\nplt.plot(X_test , regressor.predict(X_test), color = 'blue')\nplt.show()\n```\n"
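For a single feature, what `regressor.fit` computes is the classic least-squares slope and intercept: slope = cov(X, Y) / var(X), and the intercept makes the line pass through the means. A hand-rolled sketch on toy numbers (not the studentscores data):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])  # toy inputs, e.g. hours studied
Y = np.array([3.0, 5.0, 7.0, 9.0])  # toy outputs; here Y = 2*X + 1 exactly

# slope = cov(X, Y) / var(X)
slope = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean()) ** 2).sum()
# the fitted line passes through the point of means
intercept = Y.mean() - slope * X.mean()
```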
  },
  {
    "path": "Code/Day3_Multiple_Linear_Regression.md",
    "content": "# Multiple Linear Regression\n\n\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%203.jpg\">\n</p>\n\n\n## Step 1: Data Preprocessing\n\n### Importing the libraries\n```python\nimport pandas as pd\nimport numpy as np\n```\n### Importing the dataset\n```python\ndataset = pd.read_csv('50_Startups.csv')\nX = dataset.iloc[ : , :-1].values\nY = dataset.iloc[ : ,  4 ].values\n```\n\n### Encoding Categorical data\n```python\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder\nfrom sklearn.compose import ColumnTransformer\nlabelencoder = LabelEncoder()\nX[: , 3] = labelencoder.fit_transform(X[ : , 3])\nonehotencoder = ColumnTransformer([(\"onehot\", OneHotEncoder(), [3])], remainder = \"passthrough\")\nX = onehotencoder.fit_transform(X)\n```\n\n### Avoiding Dummy Variable Trap\n```python\nX = X[: , 1:]\n```\n\n### Splitting the dataset into the Training set and Test set\n```python\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state = 0)\n```\n## Step 2: Fitting Multiple Linear Regression to the Training set\n```python\nfrom sklearn.linear_model import LinearRegression\nregressor = LinearRegression()\nregressor.fit(X_train, Y_train)\n```\n\n## Step 3: Predicting the Test set results\n```python\ny_pred = regressor.predict(X_test)\n```\n"
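The dummy variable trap avoided above exists because one-hot columns always sum to 1, which makes them perfectly collinear with the regression's intercept; dropping one column removes the redundancy. A toy sketch (the category codes are made up):

```python
import numpy as np

# Hypothetical encoded state column with 3 categories, for 4 rows
states = np.array([0, 2, 1, 2])
onehot = np.eye(3)[states]  # full one-hot matrix, shape (4, 3)

# Every row sums to 1, so the three dummies plus a constant
# intercept column are linearly dependent
row_sums = onehot.sum(axis=1)

X_dummies = onehot[:, 1:]   # drop the first dummy, keeping 2 columns
```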
  },
  {
    "path": "Info-graphs/readme.md",
    "content": "Each Day Infograph\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2018 Avik Jain\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "Other Docs/readme.md",
    "content": "Images for representation\n"
  },
  {
    "path": "README.md",
    "content": "# 100-Days-Of-ML-Code\n\n100 Days of Machine Learning Coding as proposed by [Siraj Raval](https://github.com/llSourcell)\n\nGet the datasets from [here](https://github.com/Avik-Jain/100-Days-Of-ML-Code/tree/master/datasets)\n\n## Data PreProcessing | Day 1\nCheck out the code from [here](https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Code/Day%201_Data%20PreProcessing.md).\n\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%201.jpg\">\n</p>\n\n## Simple Linear Regression | Day 2\nCheck out the code from [here](https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Code/Day2_Simple_Linear_Regression.md).\n\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%202.jpg\">\n</p>\n\n## Multiple Linear Regression | Day 3\nCheck out the code from [here](https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Code/Day3_Multiple_Linear_Regression.md).\n\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%203.jpg\">\n</p>\n\n## Logistic Regression | Day 4\n\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%204.jpg\">\n</p>\n\n## Logistic Regression | Day 5\nMoving forward with #100DaysOfMLCode, today I dived deeper into what Logistic Regression actually is and the math involved behind it. Learned how the cost function is calculated, and then how to apply the gradient descent algorithm to it to minimize the error in prediction.
\nDue to time constraints, I will now be posting an infographic on alternate days.\nAlso, if someone wants to help me out with the documentation of the code, already has some experience in the field, and knows Markdown for GitHub, please contact me on LinkedIn :) .\n\n## Implementing Logistic Regression | Day 6\nCheck out the Code [here](https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Code/Day%206%20Logistic%20Regression.md)\n\n## K Nearest Neighbours | Day 7\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%207.jpg\">\n</p>\n\n## Math Behind Logistic Regression | Day 8 \n\n#100DaysOfMLCode To clear up my understanding of logistic regression, I searched the internet for a resource or article and came across this [article](https://towardsdatascience.com/logistic-regression-detailed-overview-46c4da4303bc) by Saishruthi Swaminathan.\n\nIt gives a detailed description of Logistic Regression. Do check it out.\n\n## Support Vector Machines | Day 9\nGot an intuition of what SVM is and how it is used to solve classification problems.\n\n## SVM and KNN | Day 10\nLearned more about how SVM works and about implementing the K-NN algorithm.\n\n## Implementation of K-NN | Day 11  \n\nImplemented the K-NN algorithm for classification. #100DaysOfMLCode \nThe Support Vector Machine infographic is halfway complete. Will update it tomorrow.\n\n## Support Vector Machines | Day 12\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%2012.jpg\">\n</p>\n\n## Naive Bayes Classifier | Day 13\n\nContinuing with #100DaysOfMLCode, today I went through the Naive Bayes classifier.\nI am also implementing the SVM in Python using scikit-learn. Will update the code soon.\n\n## Implementation of SVM | Day 14\nToday I implemented SVM on linearly separable data. Used the Scikit-Learn library; its SVC classifier is what we use to achieve this task.
Will be using the kernel trick in the next implementation.\nCheck the code [here](https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Code/Day%2013%20SVM.md).\n\n## Naive Bayes Classifier and Black Box Machine Learning | Day 15\nLearned about the different types of Naive Bayes classifiers. Also started the lectures by [Bloomberg](https://bloomberg.github.io/foml/#home). The first one in the playlist was Black Box Machine Learning. It gives a whole overview of prediction functions, feature extraction, learning algorithms, performance evaluation, cross-validation, sample bias, nonstationarity, overfitting, and hyperparameter tuning.\n\n## Implemented SVM using Kernel Trick | Day 16\nUsing the Scikit-Learn library, implemented the SVM algorithm along with a kernel function, which maps our data points into a higher dimension to find the optimal hyperplane.\n\n## Started Deep learning Specialization on Coursera | Day 17\nCompleted the whole of Week 1 and Week 2 in a single day. Learned logistic regression as a neural network.\n\n## Deep learning Specialization on Coursera | Day 18\nCompleted Course 1 of the deep learning specialization. Implemented a neural net in Python.\n\n## The Learning Problem, Professor Yaser Abu-Mostafa | Day 19\nStarted Lecture 1 of 18 of Caltech's Machine Learning Course - CS 156 by Professor Yaser Abu-Mostafa. It was basically an introduction to the upcoming lectures. He also explained the Perceptron Algorithm.\n\n## Started Deep learning Specialization Course 2 | Day 20\nCompleted Week 1 of Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization.\n\n## Web Scraping | Day 21\nWatched some tutorials on how to do web scraping using Beautiful Soup in order to collect data for building a model.\n\n## Is Learning Feasible? | Day 22\nLecture 2 of 18 of Caltech's Machine Learning Course - CS 156 by Professor Yaser Abu-Mostafa.
Learned about the Hoeffding Inequality.\n\n## Decision Trees | Day 23\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%2023.jpg\">\n</p>\n\n## Introduction To Statistical Learning Theory | Day 24\nLecture 3 of the Bloomberg ML course introduced some of the core concepts like input space, action space, outcome space, prediction functions, loss functions, and hypothesis spaces.\n\n## Implementing Decision Trees | Day 25\nCheck the code [here.](https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Code/Day%2025%20Decision%20Tree.md)\n\n## Jumped To Brush up Linear Algebra | Day 26\nFound an amazing YouTube [channel](https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw), 3Blue1Brown. It has a playlist called Essence of Linear Algebra. Started off by completing 4 videos, which gave a complete overview of Vectors, Linear Combinations, Spans, Basis Vectors, Linear Transformations and Matrix Multiplication.\n\nLink to the playlist [here.](https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab)\n\n## Jumped To Brush up Linear Algebra | Day 27\nContinuing with the playlist, I completed the next 4 videos, discussing 3D Transformations, Determinants, Inverse Matrices, Column Space, Null Space and Non-Square Matrices.\n\nLink to the playlist [here.](https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab)\n\n## Jumped To Brush up Linear Algebra | Day 28\nCompleted another 3 videos from 3Blue1Brown's Essence of Linear Algebra playlist.\nTopics covered were the Dot Product and the Cross Product.\n\nLink to the playlist [here.](https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab)\n\n## Jumped To Brush up Linear Algebra | Day 29\nCompleted the whole playlist today, videos 12-14. 
Really an amazing playlist to refresh the concepts of Linear Algebra.\nTopics covered were the Change of Basis, Eigenvectors and Eigenvalues, and Abstract Vector Spaces.\n\nLink to the playlist [here.](https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab)\n\n## Essence of calculus | Day 30\nWhile completing the Essence of Linear Algebra playlist, YouTube suggested another series of videos by the same channel, 3Blue1Brown. Being already impressed by the previous series on linear algebra, I dived straight into it.\nCompleted about 5 videos on topics such as Derivatives, the Chain Rule, the Product Rule, and the derivative of the exponential.\n\nLink to the playlist [here.](https://www.youtube.com/playlist?list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr)\n\n## Essence of calculus | Day 31\nWatched 2 videos on Implicit Differentiation and Limits from the playlist Essence of Calculus.\n\nLink to the playlist [here.](https://www.youtube.com/playlist?list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr)\n\n## Essence of calculus | Day 32\nWatched the remaining 4 videos, covering topics like Integration and Higher-Order Derivatives.\n\nLink to the playlist [here.](https://www.youtube.com/playlist?list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr)\n\n## Random Forests | Day 33\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%2033.jpg\">\n</p>\n\n## Implementing Random Forests | Day 34\nCheck the code [here.](https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Code/Day%2034%20Random_Forest.md)\n\n## But what *is* a Neural Network? | Deep learning, chapter 1 | Day 35\nAn amazing video on neural networks by the 3Blue1Brown YouTube channel. This video gives a good understanding of neural networks and uses a handwritten digit dataset to explain the concept. 
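The network described in the video can be sketched as plain matrix arithmetic. A minimal sketch, assuming sigmoid activations and the 784 → 16 → 16 → 10 layer sizes from the video; the random weights and the input vector are made-up stand-ins for trained parameters and a real digit image.

```python
# Tiny NumPy sketch of a forward pass: each layer is a matrix multiply
# plus a bias, squashed by a sigmoid nonlinearity.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
# 784 inputs (a flattened 28x28 digit) -> two hidden layers of 16 -> 10 outputs
W1, b1 = 0.01 * rng.normal(size=(16, 784)), np.zeros(16)
W2, b2 = 0.01 * rng.normal(size=(16, 16)), np.zeros(16)
W3, b3 = 0.01 * rng.normal(size=(10, 16)), np.zeros(10)

def forward(x):
    a1 = sigmoid(W1 @ x + b1)
    a2 = sigmoid(W2 @ a1 + b2)
    return sigmoid(W3 @ a2 + b3)  # one activation per digit class 0-9

x = rng.random(784)  # stand-in for a flattened handwritten-digit image
print(forward(x).shape)
```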
\nLink To the [video.](https://www.youtube.com/watch?v=aircAruvnKk&t=7s)\n\n## Gradient descent, how neural networks learn | Deep learning, chapter 2 | Day 36\nPart two of neural networks by the 3Blue1Brown YouTube channel. This video explains the concepts of Gradient Descent in an interesting way. A must watch and highly recommended.\nLink To the [video.](https://www.youtube.com/watch?v=IHZwWFHWa-w)\n\n## What is backpropagation really doing? | Deep learning, chapter 3 | Day 37\nPart three of neural networks by the 3Blue1Brown YouTube channel. This video mostly discusses partial derivatives and backpropagation.\nLink To the [video.](https://www.youtube.com/watch?v=Ilg3gGewQ5U)\n\n## Backpropagation calculus | Deep learning, chapter 4 | Day 38\nPart four of neural networks by the 3Blue1Brown YouTube channel. The goal here is to represent, in somewhat more formal terms, the intuition for how backpropagation works, and the video mostly discusses the partial derivatives involved.\nLink To the [video.](https://www.youtube.com/watch?v=tIeHLnjs5U8)\n\n## Deep Learning with Python, TensorFlow, and Keras tutorial | Day 39\nLink To the [video.](https://www.youtube.com/watch?v=wQ8BIBpya2k&t=19s&index=2&list=PLQVvvaa0QuDfhTox0AjmQ6tvTgMBZBEXN)\n\n## Loading in your own data - Deep Learning basics with Python, TensorFlow and Keras p.2 | Day 40\nLink To the [video.](https://www.youtube.com/watch?v=j-3vuBynnOE&list=PLQVvvaa0QuDfhTox0AjmQ6tvTgMBZBEXN&index=2)\n\n## Convolutional Neural Networks - Deep Learning basics with Python, TensorFlow and Keras p.3 | Day 41\nLink To the [video.](https://www.youtube.com/watch?v=WvoLTXIjBYU&list=PLQVvvaa0QuDfhTox0AjmQ6tvTgMBZBEXN&index=3)\n\n## Analyzing Models with TensorBoard - Deep Learning with Python, TensorFlow and Keras p.4 | Day 42\nLink To the [video.](https://www.youtube.com/watch?v=BqgTU7_cBnk&list=PLQVvvaa0QuDfhTox0AjmQ6tvTgMBZBEXN&index=4)\n\n## K Means Clustering | Day 43\nMoved to Unsupervised Learning and studied 
Clustering.\nWorking on my website; check it out: [avikjain.me](http://www.avikjain.me/)\nAlso found a wonderful animation that can help to easily understand K-Means Clustering: [Link](http://shabal.in/visuals/kmeans/6.html)\n\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%2043.jpg\">\n</p>\n\n## K Means Clustering Implementation | Day 44\nImplemented K-Means Clustering. Check the code [here.]()\n\n## Digging Deeper | NUMPY | Day 45\nGot a new book, \"Python Data Science Handbook\" by Jake VanderPlas. Check the Jupyter notebooks [here.](https://github.com/jakevdp/PythonDataScienceHandbook)\n<br>Started with Chapter 2: Introduction to NumPy. Covered topics like Data Types, NumPy arrays and Computations on NumPy arrays.\n<br>Check the code -\n<br>[Introduction to NumPy](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/02.00-Introduction-to-NumPy.ipynb)\n<br>[Understanding Data Types in Python](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/02.01-Understanding-Data-Types.ipynb)\n<br>[The Basics of NumPy Arrays](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/02.02-The-Basics-Of-NumPy-Arrays.ipynb)\n<br>[Computation on NumPy Arrays: Universal Functions](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/02.03-Computation-on-arrays-ufuncs.ipynb)\n\n## Digging Deeper | NUMPY | Day 46\nChapter 2: Aggregations, Comparisons and Broadcasting\n<br>Link to Notebooks:\n<br>[Aggregations: Min, Max, and Everything In Between](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/02.04-Computation-on-arrays-aggregates.ipynb)\n<br>[Computation on Arrays: Broadcasting](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/02.05-Computation-on-arrays-broadcasting.ipynb)\n<br>[Comparisons, Masks, and Boolean 
Logic](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/02.06-Boolean-Arrays-and-Masks.ipynb)\n\n## Digging Deeper | NUMPY | Day 47\nChapter 2: Fancy Indexing, Sorting Arrays, Structured Data\n<br>Link to Notebooks:\n<br>[Fancy Indexing](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/02.07-Fancy-Indexing.ipynb)\n<br>[Sorting Arrays](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/02.08-Sorting.ipynb)\n<br>[Structured Data: NumPy's Structured Arrays](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/02.09-Structured-Data-NumPy.ipynb)\n\n## Digging Deeper | PANDAS | Day 48\nChapter 3: Data Manipulation with Pandas\n<br>Covered various topics like Pandas Objects, Data Indexing and Selection, Operating on Data, Handling Missing Data, Hierarchical Indexing, Concat and Append.\n<br>Link To the Notebooks:\n<br>[Data Manipulation with Pandas](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.00-Introduction-to-Pandas.ipynb)\n<br>[Introducing Pandas Objects](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.01-Introducing-Pandas-Objects.ipynb)\n<br>[Data Indexing and Selection](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.02-Data-Indexing-and-Selection.ipynb)\n<br>[Operating on Data in Pandas](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.03-Operations-in-Pandas.ipynb)\n<br>[Handling Missing Data](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.04-Missing-Values.ipynb)\n<br>[Hierarchical Indexing](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.05-Hierarchical-Indexing.ipynb)\n<br>[Combining Datasets: Concat and Append](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.06-Concat-And-Append.ipynb)\n\n## Digging Deeper | PANDAS | Day 49\nChapter 3: Completed the 
following topics: Merge and Join, Aggregation and Grouping, and Pivot Tables.\n<br>[Combining Datasets: Merge and Join](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.07-Merge-and-Join.ipynb)\n<br>[Aggregation and Grouping](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.08-Aggregation-and-Grouping.ipynb)\n<br>[Pivot Tables](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.09-Pivot-Tables.ipynb)\n\n## Digging Deeper | PANDAS | Day 50\nChapter 3: Vectorized String Operations, Working with Time Series\n<br>Links to Notebooks:\n<br>[Vectorized String Operations](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.10-Working-With-Strings.ipynb)\n<br>[Working with Time Series](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.11-Working-with-Time-Series.ipynb)\n<br>[High-Performance Pandas: eval() and query()](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.12-Performance-Eval-and-Query.ipynb)\n\n## Digging Deeper | MATPLOTLIB | Day 51\nChapter 4: Visualization with Matplotlib\nLearned about Simple Line Plots, Simple Scatter Plots, and Density and Contour Plots.\n<br>Links to Notebooks:\n<br>[Visualization with Matplotlib](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.00-Introduction-To-Matplotlib.ipynb)\n<br>[Simple Line Plots](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.01-Simple-Line-Plots.ipynb)\n<br>[Simple Scatter Plots](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.02-Simple-Scatter-Plots.ipynb)\n<br>[Visualizing Errors](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.03-Errorbars.ipynb)\n<br>[Density and Contour Plots](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.04-Density-and-Contour-Plots.ipynb)\n\n## Digging Deeper | MATPLOTLIB | 
Day 52\nChapter 4: Visualization with Matplotlib\nLearned about Histograms, how to customize plot legends and colorbars, and building Multiple Subplots.\n<br>Links to Notebooks:\n<br>[Histograms, Binnings, and Density](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.05-Histograms-and-Binnings.ipynb)\n<br>[Customizing Plot Legends](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.06-Customizing-Legends.ipynb)\n<br>[Customizing Colorbars](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.07-Customizing-Colorbars.ipynb)\n<br>[Multiple Subplots](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.08-Multiple-Subplots.ipynb)\n<br>[Text and Annotation](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.09-Text-and-Annotation.ipynb)\n\n## Digging Deeper | MATPLOTLIB | Day 53\nChapter 4: Covered Three-Dimensional Plotting in Matplotlib.\n<br>Links to Notebooks:\n<br>[Three-Dimensional Plotting in Matplotlib](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.12-Three-Dimensional-Plotting.ipynb)\n\n## Hierarchical Clustering | Day 54\nStudied Hierarchical Clustering.\nCheck out this amazing [Visualization.](https://cdn-images-1.medium.com/max/800/1*ET8kCcPpr893vNZFs8j4xg.gif)\n<p align=\"center\">\n  <img src=\"https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%2054.jpg\">\n</p>\n"
  },
  {
    "path": "_config.yml",
    "content": "theme: jekyll-theme-merlot"
  },
  {
    "path": "datasets/50_Startups.csv",
    "content": "R&D Spend,Administration,Marketing Spend,State,Profit\r\n165349.2,136897.8,471784.1,New York,192261.83\r\n162597.7,151377.59,443898.53,California,191792.06\r\n153441.51,101145.55,407934.54,Florida,191050.39\r\n144372.41,118671.85,383199.62,New York,182901.99\r\n142107.34,91391.77,366168.42,Florida,166187.94\r\n131876.9,99814.71,362861.36,New York,156991.12\r\n134615.46,147198.87,127716.82,California,156122.51\r\n130298.13,145530.06,323876.68,Florida,155752.6\r\n120542.52,148718.95,311613.29,New York,152211.77\r\n123334.88,108679.17,304981.62,California,149759.96\r\n101913.08,110594.11,229160.95,Florida,146121.95\r\n100671.96,91790.61,249744.55,California,144259.4\r\n93863.75,127320.38,249839.44,Florida,141585.52\r\n91992.39,135495.07,252664.93,California,134307.35\r\n119943.24,156547.42,256512.92,Florida,132602.65\r\n114523.61,122616.84,261776.23,New York,129917.04\r\n78013.11,121597.55,264346.06,California,126992.93\r\n94657.16,145077.58,282574.31,New York,125370.37\r\n91749.16,114175.79,294919.57,Florida,124266.9\r\n86419.7,153514.11,0,New York,122776.86\r\n76253.86,113867.3,298664.47,California,118474.03\r\n78389.47,153773.43,299737.29,New York,111313.02\r\n73994.56,122782.75,303319.26,Florida,110352.25\r\n67532.53,105751.03,304768.73,Florida,108733.99\r\n77044.01,99281.34,140574.81,New York,108552.04\r\n64664.71,139553.16,137962.62,California,107404.34\r\n75328.87,144135.98,134050.07,Florida,105733.54\r\n72107.6,127864.55,353183.81,New York,105008.31\r\n66051.52,182645.56,118148.2,Florida,103282.38\r\n65605.48,153032.06,107138.38,New York,101004.64\r\n61994.48,115641.28,91131.24,Florida,99937.59\r\n61136.38,152701.92,88218.23,New York,97483.56\r\n63408.86,129219.61,46085.25,California,97427.84\r\n55493.95,103057.49,214634.81,Florida,96778.92\r\n46426.07,157693.92,210797.67,California,96712.8\r\n46014.02,85047.44,205517.64,New 
York,96479.51\r\n28663.76,127056.21,201126.82,Florida,90708.19\r\n44069.95,51283.14,197029.42,California,89949.14\r\n20229.59,65947.93,185265.1,New York,81229.06\r\n38558.51,82982.09,174999.3,California,81005.76\r\n28754.33,118546.05,172795.67,California,78239.91\r\n27892.92,84710.77,164470.71,Florida,77798.83\r\n23640.93,96189.63,148001.11,California,71498.49\r\n15505.73,127382.3,35534.17,New York,69758.98\r\n22177.74,154806.14,28334.72,California,65200.33\r\n1000.23,124153.04,1903.93,New York,64926.08\r\n1315.46,115816.21,297114.46,Florida,49490.75\r\n0,135426.92,0,California,42559.73\r\n542.05,51743.15,0,New York,35673.41\r\n0,116983.8,45173.06,California,14681.4"
  },
  {
    "path": "datasets/Data.csv",
    "content": "Country,Age,Salary,Purchased\r\nFrance,44,72000,No\r\nSpain,27,48000,Yes\r\nGermany,30,54000,No\r\nSpain,38,61000,No\r\nGermany,40,,Yes\r\nFrance,35,58000,Yes\r\nSpain,,52000,No\r\nFrance,48,79000,Yes\r\nGermany,50,83000,No\r\nFrance,37,67000,Yes"
  },
  {
    "path": "datasets/Social_Network_Ads.csv",
    "content": "User ID,Gender,Age,EstimatedSalary,Purchased\r\n15624510,Male,19,19000,0\r\n15810944,Male,35,20000,0\r\n15668575,Female,26,43000,0\r\n15603246,Female,27,57000,0\r\n15804002,Male,19,76000,0\r\n15728773,Male,27,58000,0\r\n15598044,Female,27,84000,0\r\n15694829,Female,32,150000,1\r\n15600575,Male,25,33000,0\r\n15727311,Female,35,65000,0\r\n15570769,Female,26,80000,0\r\n15606274,Female,26,52000,0\r\n15746139,Male,20,86000,0\r\n15704987,Male,32,18000,0\r\n15628972,Male,18,82000,0\r\n15697686,Male,29,80000,0\r\n15733883,Male,47,25000,1\r\n15617482,Male,45,26000,1\r\n15704583,Male,46,28000,1\r\n15621083,Female,48,29000,1\r\n15649487,Male,45,22000,1\r\n15736760,Female,47,49000,1\r\n15714658,Male,48,41000,1\r\n15599081,Female,45,22000,1\r\n15705113,Male,46,23000,1\r\n15631159,Male,47,20000,1\r\n15792818,Male,49,28000,1\r\n15633531,Female,47,30000,1\r\n15744529,Male,29,43000,0\r\n15669656,Male,31,18000,0\r\n15581198,Male,31,74000,0\r\n15729054,Female,27,137000,1\r\n15573452,Female,21,16000,0\r\n15776733,Female,28,44000,0\r\n15724858,Male,27,90000,0\r\n15713144,Male,35,27000,0\r\n15690188,Female,33,28000,0\r\n15689425,Male,30,49000,0\r\n15671766,Female,26,72000,0\r\n15782806,Female,27,31000,0\r\n15764419,Female,27,17000,0\r\n15591915,Female,33,51000,0\r\n15772798,Male,35,108000,0\r\n15792008,Male,30,15000,0\r\n15715541,Female,28,84000,0\r\n15639277,Male,23,20000,0\r\n15798850,Male,25,79000,0\r\n15776348,Female,27,54000,0\r\n15727696,Male,30,135000,1\r\n15793813,Female,31,89000,0\r\n15694395,Female,24,32000,0\r\n15764195,Female,18,44000,0\r\n15744919,Female,29,83000,0\r\n15671655,Female,35,23000,0\r\n15654901,Female,27,58000,0\r\n15649136,Female,24,55000,0\r\n15775562,Female,23,48000,0\r\n15807481,Male,28,79000,0\r\n15642885,Male,22,18000,0\r\n15789109,Female,32,117000,0\r\n15814004,Male,27,20000,0\r\n15673619,Male,25,87000,0\r\n15595135,Female,23,66000,0\r\n15583681,Male,32,120000,1\r\n15605000,Female,59,83000,0\r\n15718071,Male,24,58000,0\r\n15679760,Male,24,1
9000,0\r\n15654574,Female,23,82000,0\r\n15577178,Female,22,63000,0\r\n15595324,Female,31,68000,0\r\n15756932,Male,25,80000,0\r\n15726358,Female,24,27000,0\r\n15595228,Female,20,23000,0\r\n15782530,Female,33,113000,0\r\n15592877,Male,32,18000,0\r\n15651983,Male,34,112000,1\r\n15746737,Male,18,52000,0\r\n15774179,Female,22,27000,0\r\n15667265,Female,28,87000,0\r\n15655123,Female,26,17000,0\r\n15595917,Male,30,80000,0\r\n15668385,Male,39,42000,0\r\n15709476,Male,20,49000,0\r\n15711218,Male,35,88000,0\r\n15798659,Female,30,62000,0\r\n15663939,Female,31,118000,1\r\n15694946,Male,24,55000,0\r\n15631912,Female,28,85000,0\r\n15768816,Male,26,81000,0\r\n15682268,Male,35,50000,0\r\n15684801,Male,22,81000,0\r\n15636428,Female,30,116000,0\r\n15809823,Male,26,15000,0\r\n15699284,Female,29,28000,0\r\n15786993,Female,29,83000,0\r\n15709441,Female,35,44000,0\r\n15710257,Female,35,25000,0\r\n15582492,Male,28,123000,1\r\n15575694,Male,35,73000,0\r\n15756820,Female,28,37000,0\r\n15766289,Male,27,88000,0\r\n15593014,Male,28,59000,0\r\n15584545,Female,32,86000,0\r\n15675949,Female,33,149000,1\r\n15672091,Female,19,21000,0\r\n15801658,Male,21,72000,0\r\n15706185,Female,26,35000,0\r\n15789863,Male,27,89000,0\r\n15720943,Male,26,86000,0\r\n15697997,Female,38,80000,0\r\n15665416,Female,39,71000,0\r\n15660200,Female,37,71000,0\r\n15619653,Male,38,61000,0\r\n15773447,Male,37,55000,0\r\n15739160,Male,42,80000,0\r\n15689237,Male,40,57000,0\r\n15679297,Male,35,75000,0\r\n15591433,Male,36,52000,0\r\n15642725,Male,40,59000,0\r\n15701962,Male,41,59000,0\r\n15811613,Female,36,75000,0\r\n15741049,Male,37,72000,0\r\n15724423,Female,40,75000,0\r\n15574305,Male,35,53000,0\r\n15678168,Female,41,51000,0\r\n15697020,Female,39,61000,0\r\n15610801,Male,42,65000,0\r\n15745232,Male,26,32000,0\r\n15722758,Male,30,17000,0\r\n15792102,Female,26,84000,0\r\n15675185,Male,31,58000,0\r\n15801247,Male,33,31000,0\r\n15725660,Male,30,87000,0\r\n15638963,Female,21,68000,0\r\n15800061,Female,28,55000,0\r\n15578006,Male,23
,63000,0\r\n15668504,Female,20,82000,0\r\n15687491,Male,30,107000,1\r\n15610403,Female,28,59000,0\r\n15741094,Male,19,25000,0\r\n15807909,Male,19,85000,0\r\n15666141,Female,18,68000,0\r\n15617134,Male,35,59000,0\r\n15783029,Male,30,89000,0\r\n15622833,Female,34,25000,0\r\n15746422,Female,24,89000,0\r\n15750839,Female,27,96000,1\r\n15749130,Female,41,30000,0\r\n15779862,Male,29,61000,0\r\n15767871,Male,20,74000,0\r\n15679651,Female,26,15000,0\r\n15576219,Male,41,45000,0\r\n15699247,Male,31,76000,0\r\n15619087,Female,36,50000,0\r\n15605327,Male,40,47000,0\r\n15610140,Female,31,15000,0\r\n15791174,Male,46,59000,0\r\n15602373,Male,29,75000,0\r\n15762605,Male,26,30000,0\r\n15598840,Female,32,135000,1\r\n15744279,Male,32,100000,1\r\n15670619,Male,25,90000,0\r\n15599533,Female,37,33000,0\r\n15757837,Male,35,38000,0\r\n15697574,Female,33,69000,0\r\n15578738,Female,18,86000,0\r\n15762228,Female,22,55000,0\r\n15614827,Female,35,71000,0\r\n15789815,Male,29,148000,1\r\n15579781,Female,29,47000,0\r\n15587013,Male,21,88000,0\r\n15570932,Male,34,115000,0\r\n15794661,Female,26,118000,0\r\n15581654,Female,34,43000,0\r\n15644296,Female,34,72000,0\r\n15614420,Female,23,28000,0\r\n15609653,Female,35,47000,0\r\n15594577,Male,25,22000,0\r\n15584114,Male,24,23000,0\r\n15673367,Female,31,34000,0\r\n15685576,Male,26,16000,0\r\n15774727,Female,31,71000,0\r\n15694288,Female,32,117000,1\r\n15603319,Male,33,43000,0\r\n15759066,Female,33,60000,0\r\n15814816,Male,31,66000,0\r\n15724402,Female,20,82000,0\r\n15571059,Female,33,41000,0\r\n15674206,Male,35,72000,0\r\n15715160,Male,28,32000,0\r\n15730448,Male,24,84000,0\r\n15662067,Female,19,26000,0\r\n15779581,Male,29,43000,0\r\n15662901,Male,19,70000,0\r\n15689751,Male,28,89000,0\r\n15667742,Male,34,43000,0\r\n15738448,Female,30,79000,0\r\n15680243,Female,20,36000,0\r\n15745083,Male,26,80000,0\r\n15708228,Male,35,22000,0\r\n15628523,Male,35,39000,0\r\n15708196,Male,49,74000,0\r\n15735549,Female,39,134000,1\r\n15809347,Female,41,71000,0\r\n15660866,F
emale,58,101000,1\r\n15766609,Female,47,47000,0\r\n15654230,Female,55,130000,1\r\n15794566,Female,52,114000,0\r\n15800890,Female,40,142000,1\r\n15697424,Female,46,22000,0\r\n15724536,Female,48,96000,1\r\n15735878,Male,52,150000,1\r\n15707596,Female,59,42000,0\r\n15657163,Male,35,58000,0\r\n15622478,Male,47,43000,0\r\n15779529,Female,60,108000,1\r\n15636023,Male,49,65000,0\r\n15582066,Male,40,78000,0\r\n15666675,Female,46,96000,0\r\n15732987,Male,59,143000,1\r\n15789432,Female,41,80000,0\r\n15663161,Male,35,91000,1\r\n15694879,Male,37,144000,1\r\n15593715,Male,60,102000,1\r\n15575002,Female,35,60000,0\r\n15622171,Male,37,53000,0\r\n15795224,Female,36,126000,1\r\n15685346,Male,56,133000,1\r\n15691808,Female,40,72000,0\r\n15721007,Female,42,80000,1\r\n15794253,Female,35,147000,1\r\n15694453,Male,39,42000,0\r\n15813113,Male,40,107000,1\r\n15614187,Male,49,86000,1\r\n15619407,Female,38,112000,0\r\n15646227,Male,46,79000,1\r\n15660541,Male,40,57000,0\r\n15753874,Female,37,80000,0\r\n15617877,Female,46,82000,0\r\n15772073,Female,53,143000,1\r\n15701537,Male,42,149000,1\r\n15736228,Male,38,59000,0\r\n15780572,Female,50,88000,1\r\n15769596,Female,56,104000,1\r\n15586996,Female,41,72000,0\r\n15722061,Female,51,146000,1\r\n15638003,Female,35,50000,0\r\n15775590,Female,57,122000,1\r\n15730688,Male,41,52000,0\r\n15753102,Female,35,97000,1\r\n15810075,Female,44,39000,0\r\n15723373,Male,37,52000,0\r\n15795298,Female,48,134000,1\r\n15584320,Female,37,146000,1\r\n15724161,Female,50,44000,0\r\n15750056,Female,52,90000,1\r\n15609637,Female,41,72000,0\r\n15794493,Male,40,57000,0\r\n15569641,Female,58,95000,1\r\n15815236,Female,45,131000,1\r\n15811177,Female,35,77000,0\r\n15680587,Male,36,144000,1\r\n15672821,Female,55,125000,1\r\n15767681,Female,35,72000,0\r\n15600379,Male,48,90000,1\r\n15801336,Female,42,108000,1\r\n15721592,Male,40,75000,0\r\n15581282,Male,37,74000,0\r\n15746203,Female,47,144000,1\r\n15583137,Male,40,61000,0\r\n15680752,Female,43,133000,0\r\n15688172,Female,59,76000,
1\r\n15791373,Male,60,42000,1\r\n15589449,Male,39,106000,1\r\n15692819,Female,57,26000,1\r\n15727467,Male,57,74000,1\r\n15734312,Male,38,71000,0\r\n15764604,Male,49,88000,1\r\n15613014,Female,52,38000,1\r\n15759684,Female,50,36000,1\r\n15609669,Female,59,88000,1\r\n15685536,Male,35,61000,0\r\n15750447,Male,37,70000,1\r\n15663249,Female,52,21000,1\r\n15638646,Male,48,141000,0\r\n15734161,Female,37,93000,1\r\n15631070,Female,37,62000,0\r\n15761950,Female,48,138000,1\r\n15649668,Male,41,79000,0\r\n15713912,Female,37,78000,1\r\n15586757,Male,39,134000,1\r\n15596522,Male,49,89000,1\r\n15625395,Male,55,39000,1\r\n15760570,Male,37,77000,0\r\n15566689,Female,35,57000,0\r\n15725794,Female,36,63000,0\r\n15673539,Male,42,73000,1\r\n15705298,Female,43,112000,1\r\n15675791,Male,45,79000,0\r\n15747043,Male,46,117000,1\r\n15736397,Female,58,38000,1\r\n15678201,Male,48,74000,1\r\n15720745,Female,37,137000,1\r\n15637593,Male,37,79000,1\r\n15598070,Female,40,60000,0\r\n15787550,Male,42,54000,0\r\n15603942,Female,51,134000,0\r\n15733973,Female,47,113000,1\r\n15596761,Male,36,125000,1\r\n15652400,Female,38,50000,0\r\n15717893,Female,42,70000,0\r\n15622585,Male,39,96000,1\r\n15733964,Female,38,50000,0\r\n15753861,Female,49,141000,1\r\n15747097,Female,39,79000,0\r\n15594762,Female,39,75000,1\r\n15667417,Female,54,104000,1\r\n15684861,Male,35,55000,0\r\n15742204,Male,45,32000,1\r\n15623502,Male,36,60000,0\r\n15774872,Female,52,138000,1\r\n15611191,Female,53,82000,1\r\n15674331,Male,41,52000,0\r\n15619465,Female,48,30000,1\r\n15575247,Female,48,131000,1\r\n15695679,Female,41,60000,0\r\n15713463,Male,41,72000,0\r\n15785170,Female,42,75000,0\r\n15796351,Male,36,118000,1\r\n15639576,Female,47,107000,1\r\n15693264,Male,38,51000,0\r\n15589715,Female,48,119000,1\r\n15769902,Male,42,65000,0\r\n15587177,Male,40,65000,0\r\n15814553,Male,57,60000,1\r\n15601550,Female,36,54000,0\r\n15664907,Male,58,144000,1\r\n15612465,Male,35,79000,0\r\n15810800,Female,38,55000,0\r\n15665760,Male,39,122000,1\r\n1558
8080,Female,53,104000,1\r\n15776844,Male,35,75000,0\r\n15717560,Female,38,65000,0\r\n15629739,Female,47,51000,1\r\n15729908,Male,47,105000,1\r\n15716781,Female,41,63000,0\r\n15646936,Male,53,72000,1\r\n15768151,Female,54,108000,1\r\n15579212,Male,39,77000,0\r\n15721835,Male,38,61000,0\r\n15800515,Female,38,113000,1\r\n15591279,Male,37,75000,0\r\n15587419,Female,42,90000,1\r\n15750335,Female,37,57000,0\r\n15699619,Male,36,99000,1\r\n15606472,Male,60,34000,1\r\n15778368,Male,54,70000,1\r\n15671387,Female,41,72000,0\r\n15573926,Male,40,71000,1\r\n15709183,Male,42,54000,0\r\n15577514,Male,43,129000,1\r\n15778830,Female,53,34000,1\r\n15768072,Female,47,50000,1\r\n15768293,Female,42,79000,0\r\n15654456,Male,42,104000,1\r\n15807525,Female,59,29000,1\r\n15574372,Female,58,47000,1\r\n15671249,Male,46,88000,1\r\n15779744,Male,38,71000,0\r\n15624755,Female,54,26000,1\r\n15611430,Female,60,46000,1\r\n15774744,Male,60,83000,1\r\n15629885,Female,39,73000,0\r\n15708791,Male,59,130000,1\r\n15793890,Female,37,80000,0\r\n15646091,Female,46,32000,1\r\n15596984,Female,46,74000,0\r\n15800215,Female,42,53000,0\r\n15577806,Male,41,87000,1\r\n15749381,Female,58,23000,1\r\n15683758,Male,42,64000,0\r\n15670615,Male,48,33000,1\r\n15715622,Female,44,139000,1\r\n15707634,Male,49,28000,1\r\n15806901,Female,57,33000,1\r\n15775335,Male,56,60000,1\r\n15724150,Female,49,39000,1\r\n15627220,Male,39,71000,0\r\n15672330,Male,47,34000,1\r\n15668521,Female,48,35000,1\r\n15807837,Male,48,33000,1\r\n15592570,Male,47,23000,1\r\n15748589,Female,45,45000,1\r\n15635893,Male,60,42000,1\r\n15757632,Female,39,59000,0\r\n15691863,Female,46,41000,1\r\n15706071,Male,51,23000,1\r\n15654296,Female,50,20000,1\r\n15755018,Male,36,33000,0\r\n15594041,Female,49,36000,1"
  },
  {
    "path": "datasets/readme.md",
    "content": "Day wise Dataset Used in Code\n"
  },
  {
    "path": "datasets/studentscores.csv",
    "content": "Hours,Scores\r\n2.5,21\r\n5.1,47\r\n3.2,27\r\n8.5,75\r\n3.5,30\r\n1.5,20\r\n9.2,88\r\n5.5,60\r\n8.3,81\r\n2.7,25\r\n7.7,85\r\n5.9,62\r\n4.5,41\r\n3.3,42\r\n1.1,17\r\n8.9,95\r\n2.5,30\r\n1.9,24\r\n6.1,67\r\n7.4,69\r\n2.7,30\r\n4.8,54\r\n3.8,35\r\n6.9,76\r\n7.8,86\r\n"
  }
]